* [RFC 0/3] Detect superfluous newline in logs

From: David Marchand @ 2023-11-17 13:18 UTC
To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen

Getting readable and consistent logs is important when running a DPDK
application, especially when troubleshooting.
A common issue with logs is a DPDK change that does not add a trailing \n
to the format string (or, on the contrary, adds too many).
Such an issue only gets noticed when the log is actually hit, which may be
difficult to trigger.

This series proposes a new RTE_LOG helper that logs a one-line message and
emits a build error (with gcc) if any \n is part of the format string.

Note:
- the first patch is intentionally sent as a single block: splitting it
  into per-library commits with correct Fixes: tags is tedious work. I
  would split it for a non-RFC series; for now, it is enough to showcase
  the idea.
- the last patch shows how an existing log macro is converted.

--
David Marchand

David Marchand (3):
  lib: remove redundant newline from logs
  log: add a per line log helper
  lib: use per line logging

 drivers/crypto/ipsec_mb/ipsec_mb_ops.c     |   2 +-
 lib/bbdev/rte_bbdev.c                      |   9 +-
 lib/cfgfile/rte_cfgfile.c                  |  18 ++--
 lib/compressdev/rte_compressdev_internal.h |   5 +-
 lib/compressdev/rte_compressdev_pmd.c      |   4 +-
 lib/cryptodev/rte_cryptodev.c              |   5 +-
 lib/cryptodev/rte_cryptodev.h              |  16 ++--
 lib/dispatcher/rte_dispatcher.c            |  12 +--
 lib/dmadev/rte_dmadev.c                    |   8 +-
 lib/eventdev/eventdev_pmd.h                |  14 +--
 lib/eventdev/rte_event_crypto_adapter.c    |  12 +--
 lib/eventdev/rte_event_dma_adapter.c       |  18 ++--
 lib/eventdev/rte_event_eth_rx_adapter.c    |  30 +++---
 lib/eventdev/rte_event_eth_tx_adapter.c    |   2 +-
 lib/eventdev/rte_event_timer_adapter.c     |  21 +++--
 lib/eventdev/rte_eventdev.c                |  10 +-
 lib/gpudev/gpudev.c                        |   6 +-
 lib/graph/graph_private.h                  |   5 +-
 lib/log/rte_log.h                          |  21 +++++
 lib/metrics/rte_metrics_telemetry.c        |   6 +-
 lib/mldev/rte_mldev.c                      | 102 ++++++++++-----------
 lib/mldev/rte_mldev.h                      |   5 +-
 lib/net/rte_net_crc.c                      |  14 +--
 lib/node/ethdev_rx.c                       |   4 +-
 lib/node/ip4_lookup.c                      |   2 +-
 lib/node/ip6_lookup.c                      |   2 +-
 lib/node/kernel_rx.c                       |   8 +-
 lib/node/kernel_tx.c                       |   4 +-
 lib/node/node_private.h                    |   6 +-
 lib/rawdev/rte_rawdev_pmd.h                |   4 +-
 lib/rcu/rte_rcu_qsbr.c                     |   4 +-
 lib/rcu/rte_rcu_qsbr.h                     |  17 ++--
 lib/stack/rte_stack.c                      |   8 +-
 lib/stack/stack_pvt.h                      |   4 +-
 34 files changed, 220 insertions(+), 188 deletions(-)

--
2.41.0
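The mechanism behind that build error can be sketched outside DPDK in a few
lines of standalone C. This is only an illustration under the assumption of a
gcc-compatible compiler; the LOG_LINE name below is invented and is not the
helper added by this series. When the format string is a literal, gcc can
constant-fold __builtin_strchr, so a static_assert can reject any literal
containing a \n at compile time:

#include <assert.h>
#include <stdio.h>

/* Reject any format string literal embedding a '\n'; the macro appends
 * the newline itself so every message is exactly one line. gcc only. */
#define LOG_LINE(fmt, ...) do { \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string contains a \\n"); \
	printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
	LOG_LINE("hello %s", "world");   /* builds and prints one line */
	/* LOG_LINE("hello %s\n", "world"); would fail the static assertion */
	return 0;
}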
* [RFC 1/3] lib: remove redundant newline from logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand @ 2023-11-17 13:18 ` David Marchand 2023-11-17 13:18 ` [RFC 2/3] log: add a per line log helper David Marchand ` (7 subsequent siblings) 8 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-11-17 13:18 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, stable, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Mattias Rönnblom, Chengwen Feng, Kevin Laatz, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli Fix places where two newline characters may be logged. Also fix some direct calls to printf or RTE_LOG when a dedicated log helper for a library existed. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/bbdev/rte_bbdev.c | 6 +- lib/cfgfile/rte_cfgfile.c | 14 ++-- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 5 +- lib/dispatcher/rte_dispatcher.c | 12 +-- lib/dmadev/rte_dmadev.c | 2 +- lib/eventdev/eventdev_pmd.h | 6 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +-- lib/eventdev/rte_event_dma_adapter.c | 18 ++--- lib/eventdev/rte_event_eth_rx_adapter.c | 30 +++---- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 4 +- lib/eventdev/rte_eventdev.c | 10 +-- lib/metrics/rte_metrics_telemetry.c | 2 +- lib/mldev/rte_mldev.c | 102 ++++++++++++------------ lib/net/rte_net_crc.c | 6 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/rcu/rte_rcu_qsbr.c | 4 +- lib/rcu/rte_rcu_qsbr.h | 8 +- lib/stack/rte_stack.c | 8 +- 25 files changed, 138 insertions(+), 139 deletions(-) diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c index 30f919cd40..2a5599b7d8 100644 --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c @@ -406,7 +406,7 @@ ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer) resp_param->result = ipsec_mb_qp_release(dev, qp_id); break; default: - CDEV_LOG_ERR("invalid mp request type\n"); + CDEV_LOG_ERR("invalid mp request type"); } out: diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index cfebea09c7..e09bb97abb 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -1106,12 +1106,12 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, intr_handle = dev->intr_handle; if (intr_handle == NULL) { - rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id); + rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id); return -ENOTSUP; } if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) { - rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n", + rte_bbdev_log(ERR, "Device %u queue_id %u is too big", dev_id, queue_id); return -ENOTSUP; } @@ -1120,7 +1120,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (ret && (ret != -EEXIST)) { rte_bbdev_log(ERR, - "dev %u q %u int ctl error op %d epfd %d vec %u\n", + "dev %u q %u int ctl error op %d epfd %d vec %u", dev_id, queue_id, op, epfd, vec); return ret; } diff --git a/lib/cfgfile/rte_cfgfile.c 
b/lib/cfgfile/rte_cfgfile.c index eefba6e408..2f9cc0722a 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -137,7 +137,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) unsigned int i; if (!params) { - CFG_LOG(ERR, "missing cfgfile parameters\n"); + CFG_LOG(ERR, "missing cfgfile parameters"); return -EINVAL; } @@ -150,7 +150,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) } if (valid_comment == 0) { - CFG_LOG(ERR, "invalid comment characters %c\n", + CFG_LOG(ERR, "invalid comment characters %c", params->comment_character); return -ENOTSUP; } @@ -188,7 +188,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, lineno++; if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) { CFG_LOG(ERR, " line %d - no \\n found on string. " - "Check if line too long\n", lineno); + "Check if line too long", lineno); goto error1; } /* skip parsing if comment character found */ @@ -209,7 +209,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, char *end = memchr(buffer, ']', len); if (end == NULL) { CFG_LOG(ERR, - "line %d - no terminating ']' character found\n", + "line %d - no terminating ']' character found", lineno); goto error1; } @@ -225,7 +225,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, split[1] = memchr(buffer, '=', len); if (split[1] == NULL) { CFG_LOG(ERR, - "line %d - no '=' character found\n", + "line %d - no '=' character found", lineno); goto error1; } @@ -249,7 +249,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, if (!(flags & CFG_FLAG_EMPTY_VALUES) && (*split[1] == '\0')) { CFG_LOG(ERR, - "line %d - cannot use empty values\n", + "line %d - cannot use empty values", lineno); goto error1; } @@ -414,7 +414,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname, return 0; } - CFG_LOG(ERR, "entry name doesn't exist\n"); + CFG_LOG(ERR, "entry name doesn't exist"); return -EINVAL; } diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c index 156bccd972..762b44f03e 100644 --- a/lib/compressdev/rte_compressdev_pmd.c +++ b/lib/compressdev/rte_compressdev_pmd.c @@ -100,12 +100,12 @@ rte_compressdev_pmd_create(const char *name, struct rte_compressdev *compressdev; if (params->name[0] != '\0') { - COMPRESSDEV_LOG(INFO, "User specified device name = %s\n", + COMPRESSDEV_LOG(INFO, "User specified device name = %s", params->name); name = params->name; } - COMPRESSDEV_LOG(INFO, "Creating compressdev %s\n", name); + COMPRESSDEV_LOG(INFO, "Creating compressdev %s", name); COMPRESSDEV_LOG(INFO, "Init parameters - name: %s, socket id: %d", name, params->socket_id); diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index b258827734..16fb870d97 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -11,7 +11,6 @@ #include <stdint.h> #include <inttypes.h> -#include <rte_log.h> #include <rte_debug.h> #include <dev_driver.h> #include <rte_memory.h> @@ -2072,7 +2071,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id, } if (xforms == NULL) { - CDEV_LOG_ERR("Invalid xform\n"); + CDEV_LOG_ERR("Invalid xform"); rte_errno = EINVAL; return NULL; } @@ -2682,7 +2681,7 @@ rte_cryptodev_driver_id_get(const char *name) int driver_id = -1; if (name == NULL) { - RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL"); + CDEV_LOG_DEBUG("name pointer NULL"); return -1; } diff --git a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c index 10d02edde9..95dd41b818 
100644 --- a/lib/dispatcher/rte_dispatcher.c +++ b/lib/dispatcher/rte_dispatcher.c @@ -246,7 +246,7 @@ evd_service_register(struct rte_dispatcher *dispatcher) rc = rte_service_component_register(&service, &dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Registration of dispatcher service " - "%s failed with error code %d\n", + "%s failed with error code %d", service.name, rc); return rc; @@ -260,7 +260,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher) rc = rte_service_component_unregister(dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " - "failed with error code %d\n", rc); + "failed with error code %d", rc); return rc; } @@ -279,7 +279,7 @@ rte_dispatcher_create(uint8_t event_dev_id) RTE_CACHE_LINE_SIZE, socket_id); if (dispatcher == NULL) { - RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher"); rte_errno = ENOMEM; return NULL; } @@ -483,7 +483,7 @@ evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); if (unreg_handler == NULL) { - RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + RTE_EDEV_LOG_ERR("Invalid handler id %d", handler_id); return -EINVAL; } @@ -602,7 +602,7 @@ rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); if (unreg_finalizer == NULL) { - RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + RTE_EDEV_LOG_ERR("Invalid finalizer id %d", finalizer_id); return -EINVAL; } @@ -636,7 +636,7 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) */ if (rc != 0) RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " - "service component run state to %d\n", rc, + "service component run state to %d", rc, state); RTE_VERIFY(rc == 0); diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 4e5e420c82..009a21849a 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -726,7 +726,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status * return -EINVAL; if (vchan >= dev->data->dev_conf.nb_vchans) { - RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan); + RTE_DMA_LOG(ERR, "Device %u vchan %u out of range", dev_id, vchan); return -EINVAL; } diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 30bd90085c..2ec5aec0a8 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -49,14 +49,14 @@ extern "C" { /* Macros to check for valid device */ #define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return retval; \ } \ } while (0) #define RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, errno, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ rte_errno = errno; \ return retval; \ } \ @@ -64,7 +64,7 @@ extern "C" { #define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return; \ } \ } while (0) diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 1b435c9f0e..d46595d190 
100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -133,7 +133,7 @@ static struct event_crypto_adapter **event_crypto_adapter; /* Macros to check for valid adapter */ #define EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!eca_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -309,7 +309,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) return -EIO; @@ -319,7 +319,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -391,7 +391,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, sizeof(struct crypto_device_info), 0, socket_id); if (adapter->cdevs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices"); eca_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -1403,7 +1403,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1436,7 +1436,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index af4b5ad388..4196164305 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -20,7 +20,7 @@ #define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ do { \ if (!edma_adapter_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -313,7 +313,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_dev_configure(evdev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); if (started) { if (rte_event_dev_start(evdev_id)) return -EIO; @@ -323,7 +323,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_port_setup(evdev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); return ret; } @@ -407,7 +407,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, num_dma_dev * sizeof(struct dma_device_info), 0, socket_id); if (adapter->dma_devs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -417,7 +417,7 @@ rte_event_dma_adapter_create_ext(uint8_t 
id, uint8_t evdev_id, for (i = 0; i < num_dma_dev; i++) { ret = rte_dma_info_get(i, &info); if (ret) { - RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + RTE_EDEV_LOG_ERR("Failed to get dma device info"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return ret; @@ -1046,7 +1046,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->vchanq == NULL) { - printf("Queue pair add not supported\n"); + RTE_EDEV_LOG_ERR("Queue pair add not supported"); return -ENOMEM; } } @@ -1057,7 +1057,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->tqmap == NULL) { - printf("tq pair add not supported\n"); + RTE_EDEV_LOG_ERR("tq pair add not supported"); return -ENOMEM; } } @@ -1297,7 +1297,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1326,7 +1326,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 6db03adf04..afd5f5ab33 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -293,14 +293,14 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ return retval; \ } \ } while (0) #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_GOTO_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ ret = retval; \ goto error; \ } \ @@ -308,7 +308,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, retval) do { \ if ((token) == NULL || strlen(token) == 0 || !isdigit(*token)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token\n"); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token"); \ ret = retval; \ goto error; \ } \ @@ -316,7 +316,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u", port_id); \ ret = retval; \ goto error; \ } \ @@ -1540,7 +1540,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) @@ -1551,7 +1551,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", 
port_id); return ret; } @@ -1628,7 +1628,7 @@ rxa_create_intr_thread(struct event_eth_rx_adapter *rx_adapter) if (!err) return 0; - RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); error: rte_ring_free(rx_adapter->intr_ring); @@ -1644,12 +1644,12 @@ rxa_destroy_intr_thread(struct event_eth_rx_adapter *rx_adapter) err = pthread_cancel((pthread_t)rx_adapter->rx_intr_thread.opaque_id); if (err) - RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d\n", + RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d", err); err = rte_thread_join(rx_adapter->rx_intr_thread, NULL); if (err) - RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); rte_ring_free(rx_adapter->intr_ring); @@ -1915,7 +1915,7 @@ rxa_init_service(struct event_eth_rx_adapter *rx_adapter, uint8_t id) if (rte_mbuf_dyn_rx_timestamp_register( &event_eth_rx_timestamp_dynfield_offset, &event_eth_rx_timestamp_dynflag) != 0) { - RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf\n"); + RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf"); return -rte_errno; } @@ -2445,7 +2445,7 @@ rxa_create(uint8_t id, uint8_t dev_id, RTE_DIM(default_rss_key)); if (rx_adapter->eth_devices == NULL) { - RTE_EDEV_LOG_ERR("failed to get mem for eth devices\n"); + RTE_EDEV_LOG_ERR("failed to get mem for eth devices"); rte_free(rx_adapter); return -ENOMEM; } @@ -2497,12 +2497,12 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params, return 0; } else if (!rxa_params->use_queue_event_buf && rxa_params->event_buf_size == 0) { - RTE_EDEV_LOG_ERR("event buffer size can't be zero\n"); + RTE_EDEV_LOG_ERR("event buffer size can't be zero"); return -EINVAL; } else if (rxa_params->use_queue_event_buf && rxa_params->event_buf_size != 0) { RTE_EDEV_LOG_ERR("event buffer size needs to be configured " - "as part of queue add\n"); + "as part of queue add"); return -EINVAL; } @@ -3597,7 +3597,7 @@ handle_rxa_stats(const char *cmd __rte_unused, /* Get Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id, &rx_adptr_stats)) { - RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats"); return -1; } @@ -3636,7 +3636,7 @@ handle_rxa_stats_reset(const char *cmd __rte_unused, /* Reset Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) { - RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats"); return -1; } diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c index 360d5caf6a..56435be991 100644 --- a/lib/eventdev/rte_event_eth_tx_adapter.c +++ b/lib/eventdev/rte_event_eth_tx_adapter.c @@ -334,7 +334,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, pc); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); if (started) { if (rte_event_dev_start(dev_id)) diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 27466707bc..3f22e85173 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -106,7 +106,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = 
rte_event_dev_configure(dev_id, &dev_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to configure event dev %u\n", dev_id); + EVTIM_LOG_ERR("failed to configure event dev %u", dev_id); if (started) if (rte_event_dev_start(dev_id)) return -EIO; @@ -116,7 +116,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to setup event port %u on event dev %u\n", + EVTIM_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id); return ret; } diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 0ca32d6721..157752868d 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1007,13 +1007,13 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t } if (*dev->dev_ops->port_link == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } if (profile_id && *dev->dev_ops->port_link_profile == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } @@ -1428,8 +1428,8 @@ rte_event_vector_pool_create(const char *name, unsigned int n, int ret; if (!nb_elem) { - RTE_LOG(ERR, EVENTDEV, - "Invalid number of elements=%d requested\n", nb_elem); + RTE_EDEV_LOG_ERR("Invalid number of elements=%d requested", + nb_elem); rte_errno = EINVAL; return NULL; } @@ -1444,7 +1444,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n"); + RTE_EDEV_LOG_ERR("error setting mempool handler"); goto err; } diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 5be21b2e86..1d133e1f8c 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -363,7 +363,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names, } } if (j == num_metrics) { - METRICS_LOG_WARN("Invalid stat name %s\n", + METRICS_LOG_WARN("Invalid stat name %s", stat_names[i]); free(names); return -EINVAL; diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c index cc5f2e0cc6..196b1850e6 100644 --- a/lib/mldev/rte_mldev.c +++ b/lib/mldev/rte_mldev.c @@ -159,7 +159,7 @@ int rte_ml_dev_init(size_t dev_max) { if (dev_max == 0 || dev_max > INT16_MAX) { - RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)\n", dev_max, INT16_MAX); + RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)", dev_max, INT16_MAX); rte_errno = EINVAL; return -rte_errno; } @@ -217,7 +217,7 @@ rte_ml_dev_socket_id(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -232,7 +232,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -241,7 +241,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) return -ENOTSUP; if (dev_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL", dev_id); return -EINVAL; } 
memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); @@ -257,7 +257,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -271,7 +271,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) } if (config == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL", dev_id); return -EINVAL; } @@ -280,7 +280,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) return ret; if (config->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u\n", dev_id, + RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u", dev_id, config->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } @@ -294,7 +294,7 @@ rte_ml_dev_close(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -318,7 +318,7 @@ rte_ml_dev_start(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -345,7 +345,7 @@ rte_ml_dev_stop(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -372,7 +372,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -386,7 +386,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, } if (qp_conf == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL", dev_id); return -EINVAL; } @@ -404,7 +404,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -413,7 +413,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) return -ENOTSUP; if (stats == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL", dev_id); return -EINVAL; } memset(stats, 0, sizeof(struct rte_ml_dev_stats)); @@ -427,7 +427,7 @@ rte_ml_dev_stats_reset(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return; } @@ -445,7 +445,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -462,7 +462,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + 
RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -471,12 +471,12 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i return -ENOTSUP; if (name == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL", dev_id); return -EINVAL; } @@ -490,7 +490,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -499,12 +499,12 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t return -ENOTSUP; if (stat_ids == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL", dev_id); return -EINVAL; } @@ -518,7 +518,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_ struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -535,7 +535,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -544,7 +544,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) return -ENOTSUP; if (fd == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL", dev_id); return -EINVAL; } @@ -557,7 +557,7 @@ rte_ml_dev_selftest(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -574,7 +574,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -583,12 +583,12 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * return -ENOTSUP; if (params == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL", dev_id); return -EINVAL; } if (model_id == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL", dev_id); return -EINVAL; } @@ -601,7 +601,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -618,7 +618,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, 
"Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -635,7 +635,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -652,7 +652,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -661,7 +661,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf return -ENOTSUP; if (model_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL\n", dev_id, + RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL", dev_id, model_id); return -EINVAL; } @@ -675,7 +675,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -684,7 +684,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) return -ENOTSUP; if (buffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL", dev_id); return -EINVAL; } @@ -698,7 +698,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -707,12 +707,12 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d return -ENOTSUP; if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -726,7 +726,7 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -735,12 +735,12 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * return -ENOTSUP; if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -811,7 +811,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -823,13 +823,13 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be 
NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -847,7 +847,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -859,13 +859,13 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -883,7 +883,7 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -892,12 +892,12 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error return -ENOTSUP; if (op == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL", dev_id); return -EINVAL; } if (error == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL", dev_id); return -EINVAL; } #else diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index a685f9e7bb..900d6de7f4 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -179,7 +179,7 @@ avx512_vpclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_512) return handlers_avx512; #endif - NET_LOG(INFO, "Requirements not met, can't use AVX512\n"); + NET_LOG(INFO, "Requirements not met, can't use AVX512"); return NULL; } @@ -205,7 +205,7 @@ sse42_pclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_sse42; #endif - NET_LOG(INFO, "Requirements not met, can't use SSE\n"); + NET_LOG(INFO, "Requirements not met, can't use SSE"); return NULL; } @@ -231,7 +231,7 @@ neon_pmull_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_neon; #endif - NET_LOG(INFO, "Requirements not met, can't use NEON\n"); + NET_LOG(INFO, "Requirements not met, can't use NEON"); return NULL; } diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index 3e8fac1df4..475eff6abe 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -160,13 +160,13 @@ ethdev_ptype_setup(uint16_t port, uint16_t queue) if (!l3_ipv4 || !l3_ipv6) { node_info("ethdev_rx", - "Enabling ptype callback for required ptypes on port %u\n", + "Enabling ptype callback for required ptypes on port %u", port); if (!rte_eth_add_rx_callback(port, queue, eth_pkt_parse_cb, NULL)) { node_err("ethdev_rx", - "Failed to add rx ptype cb: port=%d, queue=%d\n", + "Failed to add rx ptype cb: port=%d, queue=%d", port, queue); return -EINVAL; } diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c index 0dbfde64fe..18955971f6 100644 --- a/lib/node/ip4_lookup.c +++ b/lib/node/ip4_lookup.c @@ -143,7 +143,7 @@ rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop, ip, depth, val); if (ret < 0) { 
node_err("ip4_lookup", - "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d\n", + "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c index 6f56eb5ec5..309964f60f 100644 --- a/lib/node/ip6_lookup.c +++ b/lib/node/ip6_lookup.c @@ -283,7 +283,7 @@ rte_node_ip6_route_add(const uint8_t *ip, uint8_t depth, uint16_t next_hop, if (ret < 0) { node_err("ip6_lookup", "Unable to add entry %s / %d nh (%x) to LPM " - "table on sock %d, rc=%d\n", + "table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/kernel_rx.c b/lib/node/kernel_rx.c index 2dba7c8cc7..6c20cdbb1e 100644 --- a/lib/node/kernel_rx.c +++ b/lib/node/kernel_rx.c @@ -134,7 +134,7 @@ kernel_rx_node_do(struct rte_graph *graph, struct rte_node *node, kernel_rx_node if (len == 0 || len == 0xFFFF) { rte_pktmbuf_free(m); if (rx->idx <= 0) - node_dbg("kernel_rx", "rx_mbuf array is empty\n"); + node_dbg("kernel_rx", "rx_mbuf array is empty"); rx->idx--; break; } @@ -207,20 +207,20 @@ kernel_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); if (ctx->pktmbuf_pool == NULL) { - node_err("kernel_rx", "Invalid mbuf pool on graph %s\n", graph->name); + node_err("kernel_rx", "Invalid mbuf pool on graph %s", graph->name); return -EINVAL; } recv_info = rte_zmalloc_socket("kernel_rx_info", sizeof(kernel_rx_info_t), RTE_CACHE_LINE_SIZE, graph->socket); if (!recv_info) { - node_err("kernel_rx", "Kernel recv_info is NULL\n"); + node_err("kernel_rx", "Kernel recv_info is NULL"); return -ENOMEM; } sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (sock < 0) { - node_err("kernel_rx", "Unable to open RAW socket\n"); + node_err("kernel_rx", "Unable to open RAW socket"); return sock; } diff --git a/lib/node/kernel_tx.c b/lib/node/kernel_tx.c index 27d1808c71..3a96741622 100644 --- a/lib/node/kernel_tx.c +++ b/lib/node/kernel_tx.c @@ -36,7 +36,7 @@ kernel_tx_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t sin.sin_addr.s_addr = ip4->dst_addr; if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) - node_err("kernel_tx", "Unable to send packets: %s\n", strerror(errno)); + node_err("kernel_tx", "Unable to send packets: %s", strerror(errno)); } } @@ -87,7 +87,7 @@ kernel_tx_node_init(const struct rte_graph *graph __rte_unused, struct rte_node ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (ctx->sock < 0) - node_err("kernel_tx", "Unable to open RAW socket\n"); + node_err("kernel_tx", "Unable to open RAW socket"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index a9f3d6cc98..41a44be4b9 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -92,7 +92,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; @@ -144,7 +144,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 5979fb0efb..6b908e7ee0 100644 --- 
a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -299,7 +299,7 @@ rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Copy the current value of token. @@ -350,7 +350,7 @@ rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id) { RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* The reader can go offline only after the load of the @@ -427,7 +427,7 @@ rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v, 1, rte_memory_order_release); __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING, - "Lock counter %u. Nested locks?\n", + "Lock counter %u. Nested locks?", v->qsbr_cnt[thread_id].lock_cnt); #endif } @@ -481,7 +481,7 @@ rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Acquire the changes to the shared data structure released diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c index 1fabec2bfe..1dab6d6645 100644 --- a/lib/stack/rte_stack.c +++ b/lib/stack/rte_stack.c @@ -56,7 +56,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, int ret; if (flags & ~(RTE_STACK_F_LF)) { - STACK_LOG_ERR("Unsupported stack flags %#x\n", flags); + STACK_LOG_ERR("Unsupported stack flags %#x", flags); return NULL; } @@ -65,7 +65,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, #endif #if !defined(RTE_STACK_LF_SUPPORTED) if (flags & RTE_STACK_F_LF) { - STACK_LOG_ERR("Lock-free stack is not supported on your platform\n"); + STACK_LOG_ERR("Lock-free stack is not supported on your platform"); rte_errno = ENOTSUP; return NULL; } @@ -82,7 +82,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - STACK_LOG_ERR("Cannot reserve memory for tailq\n"); + STACK_LOG_ERR("Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -92,7 +92,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id, 0, __alignof__(*s)); if (mz == NULL) { - STACK_LOG_ERR("Cannot reserve stack memzone!\n"); + STACK_LOG_ERR("Cannot reserve stack memzone!"); rte_mcfg_tailq_write_unlock(); rte_free(te); return NULL; -- 2.41.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
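To see why this patch strips the trailing \n from so many call sites: each
touched component has a log wrapper that already appends the newline, so a \n
in the caller's format string used to produce an extra blank line. A reduced
illustration follows; the MYLIB_LOG macro is hypothetical and merely mimics
the shape of the DPDK wrappers:

#include <stdio.h>

/* Component-level wrapper that already terminates every message. */
#define MYLIB_LOG(fmt, ...) printf("MYLIB: " fmt "\n", ##__VA_ARGS__)

int main(void)
{
	MYLIB_LOG("vchan %u out of range\n", 3u); /* before: message followed by a blank line */
	MYLIB_LOG("vchan %u out of range", 3u);   /* after this patch: exactly one line */
	return 0;
}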
* [RFC 2/3] log: add a per line log helper

From: David Marchand @ 2023-11-17 13:18 UTC
To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen

The gcc builtin __builtin_strchr can be used in a static assertion to check
whether a passed format string contains a \n.
This can be useful to detect double \n in log messages.

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 lib/log/rte_log.h | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h
index f7a8405de9..d6db699a07 100644
--- a/lib/log/rte_log.h
+++ b/lib/log/rte_log.h
@@ -17,6 +17,7 @@ extern "C" {
 #endif
 
+#include <assert.h>
 #include <stdint.h>
 #include <stdio.h>
 #include <stdarg.h>
@@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
 			 RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) :	\
 	 0)
 
+#ifdef RTE_TOOLCHAIN_GCC
+#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \
+	static_assert(!__builtin_strchr(fmt, '\n'), \
+		"This log format string contains a \\n")
+#else
+#define RTE_LOG_CHECK_NO_NEWLINE(...)
+#endif
+
+#define RTE_LOG_LINE(l, t, ...) do { \
+	RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \
+	RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+		RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
+#define RTE_LOG_DP_LINE(l, t, ...) do { \
+	RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \
+	RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
+		RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
 #define RTE_LOG_REGISTER_IMPL(type, name, level) \
	int type; \
	RTE_INIT(__##type) \
--
2.41.0
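From a caller's point of view the helper would be used roughly as below. This
is a sketch only: it reuses the existing EVENTDEV logtype purely as an example
and assumes the macro above is merged as-is. The trailing newline comes from
RTE_LOG_LINE itself, and with gcc a stray \n in the literal is caught at build
time instead of showing up as a blank line at runtime:

#include <stdint.h>
#include <rte_log.h>

void
report_bad_port(uint16_t port_id)
{
	/* One line is emitted; no "\n" in the format string. */
	RTE_LOG_LINE(ERR, EVENTDEV, "Invalid port_id=%u", port_id);

	/*
	 * With gcc, the following would be rejected at build time by the
	 * static assertion above ("This log format string contains a \n"):
	 *   RTE_LOG_LINE(ERR, EVENTDEV, "Invalid port_id=%u\n", port_id);
	 */
}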
* [RFC 3/3] lib: use per line logging 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand 2023-11-17 13:18 ` [RFC 1/3] lib: remove redundant newline from logs David Marchand 2023-11-17 13:18 ` [RFC 2/3] log: add a per line log helper David Marchand @ 2023-11-17 13:18 ` David Marchand 2023-11-17 13:27 ` [RFC 0/3] Detect superfluous newline in logs Bruce Richardson ` (5 subsequent siblings) 8 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-11-17 13:18 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Chengwen Feng, Kevin Laatz, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli Use RTE_LOG_LINE in existing macros that append a \n. Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/bbdev/rte_bbdev.c | 3 ++- lib/cfgfile/rte_cfgfile.c | 4 ++-- lib/compressdev/rte_compressdev_internal.h | 5 +++-- lib/cryptodev/rte_cryptodev.h | 16 ++++++++-------- lib/dmadev/rte_dmadev.c | 6 ++++-- lib/eventdev/eventdev_pmd.h | 8 ++++---- lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- lib/gpudev/gpudev.c | 6 ++++-- lib/graph/graph_private.h | 5 +++-- lib/metrics/rte_metrics_telemetry.c | 4 ++-- lib/mldev/rte_mldev.h | 5 +++-- lib/net/rte_net_crc.c | 8 ++++---- lib/node/node_private.h | 6 ++++-- lib/rawdev/rte_rawdev_pmd.h | 4 ++-- lib/rcu/rte_rcu_qsbr.h | 9 ++++----- lib/stack/stack_pvt.h | 4 ++-- 16 files changed, 61 insertions(+), 49 deletions(-) diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index e09bb97abb..a61aa599aa 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -28,10 +28,11 @@ /* BBDev library logging ID */ RTE_LOG_REGISTER_DEFAULT(bbdev_logtype, NOTICE); +#define RTE_LOGTYPE_BBDEV bbdev_logtype /* Helper macro for logging */ #define rte_bbdev_log(level, fmt, ...) \ - rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__) + RTE_LOG_LINE(level, BBDEV, fmt , ##__VA_ARGS__) #define rte_bbdev_log_debug(fmt, ...) \ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \ diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index 2f9cc0722a..6a5e4fd942 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -29,10 +29,10 @@ struct rte_cfgfile { /* Setting up dynamic logging 8< */ RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO); +#define RTE_LOGTYPE_CFGFILE cfgfile_logtype #define CFG_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, cfgfile_logtype, "%s(): " fmt "\n", \ - __func__, ## args) + RTE_LOG_LINE(level, CFGFILE, "%s(): " fmt, __func__, ## args) /* >8 End of setting up dynamic logging */ /** when we resize a file structure, how many extra entries diff --git a/lib/compressdev/rte_compressdev_internal.h b/lib/compressdev/rte_compressdev_internal.h index b3b193e3ee..34d6d95649 100644 --- a/lib/compressdev/rte_compressdev_internal.h +++ b/lib/compressdev/rte_compressdev_internal.h @@ -21,9 +21,10 @@ extern "C" { /* Logging Macros */ extern int compressdev_logtype; +#define RTE_LOGTYPE_COMPRESSDEV compressdev_logtype + #define COMPRESSDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, compressdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, COMPRESSDEV, "%s(): " fmt, __func__, ##args) /** * Dequeue processed packets from queue pair of a device. 
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index aaeaf294e6..5131d2d947 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -31,23 +31,23 @@ extern const char **rte_cyptodev_names; /* Logging Macros */ #define CDEV_LOG_ERR(...) \ - RTE_LOG(ERR, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_LOG_INFO(...) \ - RTE_LOG(INFO, CRYPTODEV, \ - RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(INFO, CRYPTODEV, \ + RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,), \ RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_PMD_TRACE(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,), \ dev, __func__, RTE_FMT_TAIL(__VA_ARGS__,))) /** diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 009a21849a..c1a166858c 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -32,9 +32,11 @@ static struct { } *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_LOGTYPE_DMA rte_dma_logtype + #define RTE_DMA_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_dma_logtype, RTE_FMT("dma: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, DMA, RTE_FMT("dma: " RTE_FMT_HEAD(__VA_ARGS__,), \ + RTE_FMT_TAIL(__VA_ARGS__,))) int rte_dma_dev_max(size_t dev_max) diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 2ec5aec0a8..50cf7d9057 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -33,14 +33,14 @@ extern "C" { /* Logging Macros */ #define RTE_EDEV_LOG_ERR(...) \ - RTE_LOG(ERR, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define RTE_EDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #else #define RTE_EDEV_LOG_DEBUG(...) (void)0 diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 3f22e85173..6ebb7b257e 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -30,27 +30,30 @@ #define DATA_MZ_NAME_FORMAT "rte_event_timer_adapter_data_%d" RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM evtim_logtype RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM_BUF evtim_buffer_logtype RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE); +#define RTE_LOGTYPE_EVTIM_SVC evtim_svc_logtype static struct rte_event_timer_adapter *adapters; static const struct event_timer_adapter_ops swtim_ops; #define EVTIM_LOG(level, logtype, ...) 
\ - rte_log(RTE_LOG_ ## level, logtype, \ - RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) \ - "\n", __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, logtype, \ + RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) -#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, evtim_logtype, __VA_ARGS__) +#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, EVTIM, __VA_ARGS__) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define EVTIM_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM, __VA_ARGS__) #define EVTIM_BUF_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_buffer_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_BUF, __VA_ARGS__) #define EVTIM_SVC_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_svc_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_SVC, __VA_ARGS__) #else #define EVTIM_LOG_DBG(...) (void)0 #define EVTIM_BUF_LOG_DBG(...) (void)0 diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c index 6845d18b4d..79118c3e94 100644 --- a/lib/gpudev/gpudev.c +++ b/lib/gpudev/gpudev.c @@ -17,9 +17,11 @@ /* Logging */ RTE_LOG_REGISTER_DEFAULT(gpu_logtype, NOTICE); +#define RTE_LOGTYPE_GPUDEV gpu_logtype + #define GPU_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, gpu_logtype, RTE_FMT("gpu: " \ - RTE_FMT_HEAD(__VA_ARGS__, ) "\n", RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GPUDEV, RTE_FMT("gpu: " RTE_FMT_HEAD(__VA_ARGS__, ), \ + RTE_FMT_TAIL(__VA_ARGS__, ))) /* Set any driver error as EPERM */ #define GPU_DRV_RET(function) \ diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d0ef13b205..672a034287 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -18,10 +18,11 @@ #include "rte_graph_worker.h" extern int rte_graph_logtype; +#define RTE_LOGTYPE_GRAPH rte_graph_logtype #define GRAPH_LOG(level, ...) \ - rte_log(RTE_LOG_##level, rte_graph_logtype, \ - RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, GRAPH, \ + RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__, ))) #define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__) diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 1d133e1f8c..169590fdb7 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -16,11 +16,11 @@ struct telemetry_metrics_data tel_met_data; int metrics_log_level; +#define RTE_LOGTYPE_METRICS metrics_log_level /* Logging Macros */ #define METRICS_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, metrics_log_level, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, METRICS, "%s(): "fmt, __func__, ##args) #define METRICS_LOG_ERR(fmt, args...) \ METRICS_LOG(ERR, fmt, ## args) diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 63b2670bb0..5cf6f0566f 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -144,9 +144,10 @@ extern "C" { /* Logging Macro */ extern int rte_ml_dev_logtype; +#define RTE_LOGTYPE_MLDEV rte_ml_dev_logtype -#define RTE_MLDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_##level, rte_ml_dev_logtype, "%s(): " fmt "\n", __func__, ##args) +#define RTE_MLDEV_LOG(level, fmt, args...) 
\ + RTE_LOG_LINE(level, MLDEV, "%s(): " fmt, __func__, ##args) #define RTE_ML_STR_MAX 128 /**< Maximum length of name string */ diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index 900d6de7f4..b401ea3dd8 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -70,11 +70,11 @@ static const rte_net_crc_handler handlers_neon[] = { static uint16_t max_simd_bitwidth; -#define NET_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, libnet_logtype, "%s(): " fmt "\n", \ - __func__, ## args) - RTE_LOG_REGISTER_DEFAULT(libnet_logtype, INFO); +#define RTE_LOGTYPE_NET libnet_logtype + +#define NET_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, NET, "%s(): " fmt, __func__, ## args) /* Scalar handling */ diff --git a/lib/node/node_private.h b/lib/node/node_private.h index 26135aaa5b..5702146db4 100644 --- a/lib/node/node_private.h +++ b/lib/node/node_private.h @@ -11,9 +11,11 @@ #include <rte_mbuf_dyn.h> extern int rte_node_logtype; +#define RTE_LOGTYPE_NODE rte_node_logtype + #define NODE_LOG(level, node_name, ...) \ - rte_log(RTE_LOG_##level, rte_node_logtype, \ - RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, NODE, \ + RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ node_name, __func__, __LINE__, \ RTE_FMT_TAIL(__VA_ARGS__, ))) diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h index 7b9ef1d09f..7173282c66 100644 --- a/lib/rawdev/rte_rawdev_pmd.h +++ b/lib/rawdev/rte_rawdev_pmd.h @@ -27,11 +27,11 @@ extern "C" { #include "rte_rawdev.h" extern int librawdev_logtype; +#define RTE_LOGTYPE_RAWDEV librawdev_logtype /* Logging Macros */ #define RTE_RDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, librawdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, RAWDEV, "%s(): " fmt, __func__, ##args) #define RTE_RDEV_ERR(fmt, args...) \ RTE_RDEV_LOG(ERR, fmt, ## args) diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 6b908e7ee0..23c9f89805 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -36,20 +36,19 @@ extern "C" { #include <rte_ring.h> extern int rte_rcu_log_type; +#define RTE_LOGTYPE_RCU rte_rcu_log_type #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) #else #define __RTE_RCU_DP_LOG(level, fmt, args...) #endif #if defined(RTE_LIBRTE_RCU_DEBUG) -#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\ +#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do { \ if (v->qsbr_cnt[thread_id].lock_cnt) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args); \ + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args); \ } while (0) #else #define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) diff --git a/lib/stack/stack_pvt.h b/lib/stack/stack_pvt.h index c7eab4027d..2dce42a9da 100644 --- a/lib/stack/stack_pvt.h +++ b/lib/stack/stack_pvt.h @@ -8,10 +8,10 @@ #include <rte_log.h> extern int stack_logtype; +#define RTE_LOGTYPE_STACK stack_logtype #define STACK_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, STACK, "%s(): "fmt, __func__, ##args) #define STACK_LOG_ERR(fmt, args...) \ STACK_LOG(ERR, fmt, ## args) -- 2.41.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
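A note on the mechanism behind the conversion above: the build-time rejection of "\n" comes from the RTE_LOG_LINE helper added in patch 2/3, which is not quoted in this sub-thread. The following is only a minimal standalone sketch of the same idea, assuming gcc and a string-literal format string; the LOG_LINE name and the printf backend are illustrative, not the series' actual lib/log code.

    #include <assert.h>
    #include <stdio.h>

    /*
     * Sketch: refuse any '\n' embedded in a literal format string at build
     * time. gcc folds __builtin_strchr() on a string literal to a constant,
     * so the static_assert can evaluate it (a gcc extension, which is why
     * the cover letter only promises the build error "with gcc").
     */
    #define LOG_LINE(fmt, ...) do { \
            static_assert(!__builtin_strchr(fmt, '\n'), \
                    "log format string must not contain a newline"); \
            printf(fmt "\n", ##__VA_ARGS__); \
    } while (0)

    int main(void)
    {
            LOG_LINE("hello %s", "world");  /* exactly one '\n' is appended */
            /* LOG_LINE("oops\n"); */       /* would fail the build with gcc */
            return 0;
    }

Uncommenting the second call makes gcc stop with the static_assert message, which is the behaviour described for the new helper.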
* Re: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (2 preceding siblings ...) 2023-11-17 13:18 ` [RFC 3/3] lib: use per line logging David Marchand @ 2023-11-17 13:27 ` Bruce Richardson 2023-11-17 13:48 ` David Marchand 2023-11-17 13:47 ` Morten Brørup ` (4 subsequent siblings) 8 siblings, 1 reply; 122+ messages in thread From: Bruce Richardson @ 2023-11-17 13:27 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, stephen On Fri, Nov 17, 2023 at 02:18:21PM +0100, David Marchand wrote: > Getting readable and consistent logs is important when running a DPDK > application, especially when troubleshooting. > A common issue with logs is when a DPDK change do not add (or on the > contrary add too many \n) in the format string. > > This issue would only get noticed when actually hitting this log (which > may be something difficult to do). > > This series proposes to introduce a new RTE_LOG helper that is > responsible for logging a one line message and spews a build error (with > gcc) if any \n is part of the format string. > > > Note: > - the first patch is intentionnally sent as a single block: splitting it > into per library commits with correct Fixes: tags is a tedious work. > I would split it for a non RFC series. For now, it is enough to show > case the idea. > - the last patch shows how an existing log macro is converted, > > very nice. I definitely think this should be implemented for 24.03 Thanks, /Bruce PS: I'm not even sure that the first patch needs to be split. I think it's fairly clear as-is. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 13:27 ` [RFC 0/3] Detect superfluous newline in logs Bruce Richardson @ 2023-11-17 13:48 ` David Marchand 2023-11-17 14:11 ` Bruce Richardson 0 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-11-17 13:48 UTC (permalink / raw) To: Bruce Richardson; +Cc: dev, thomas, ferruh.yigit, stephen On Fri, Nov 17, 2023 at 2:27 PM Bruce Richardson <bruce.richardson@intel.com> wrote: > > On Fri, Nov 17, 2023 at 02:18:21PM +0100, David Marchand wrote: > > Getting readable and consistent logs is important when running a DPDK > > application, especially when troubleshooting. > > A common issue with logs is when a DPDK change do not add (or on the > > contrary add too many \n) in the format string. > > > > This issue would only get noticed when actually hitting this log (which > > may be something difficult to do). > > > > This series proposes to introduce a new RTE_LOG helper that is > > responsible for logging a one line message and spews a build error (with > > gcc) if any \n is part of the format string. > > > > > > Note: > > - the first patch is intentionnally sent as a single block: splitting it > > into per library commits with correct Fixes: tags is a tedious work. > > I would split it for a non RFC series. For now, it is enough to show > > case the idea. > > - the last patch shows how an existing log macro is converted, > > > > > very nice. I definitely think this should be implemented for 24.03 I am still wondering how this idea should evolve... Some points I have in mind but for which I am not sure what is the best. Some log helpers were exposed to applications and had no explicit requirement wrt \n. Applications may have used those helpers with multiline messages. So maybe existing *exposed* helpers should be left untouched... and a new helper would need to be introduced. IOW with an example, cryptodev (missing a RTE_ prefix) CDEV_LOG_ERR macro is publicly exposed. A CDEV_LOG_LINE_ERR may be needed to avoid breaking external users. There are a lot of other log macros that let it to the callers to add a trailing \n. Should we convert them? Converting the *whole* DPDK code to the new helper (with some exceptions for people who like multiline logs..) would be nice to close this topic once and for all. But it would likely be a nightmare for later fixes that contain logs and which could introduce regressions in such logs if the backport does not take care of re-adding a \n. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 13:48 ` David Marchand @ 2023-11-17 14:11 ` Bruce Richardson 2023-11-17 14:21 ` David Marchand 0 siblings, 1 reply; 122+ messages in thread From: Bruce Richardson @ 2023-11-17 14:11 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, stephen On Fri, Nov 17, 2023 at 02:48:25PM +0100, David Marchand wrote: > On Fri, Nov 17, 2023 at 2:27 PM Bruce Richardson > <bruce.richardson@intel.com> wrote: > > > > On Fri, Nov 17, 2023 at 02:18:21PM +0100, David Marchand wrote: > > > Getting readable and consistent logs is important when running a DPDK > > > application, especially when troubleshooting. > > > A common issue with logs is when a DPDK change do not add (or on the > > > contrary add too many \n) in the format string. > > > > > > This issue would only get noticed when actually hitting this log (which > > > may be something difficult to do). > > > > > > This series proposes to introduce a new RTE_LOG helper that is > > > responsible for logging a one line message and spews a build error (with > > > gcc) if any \n is part of the format string. > > > > > > > > > Note: > > > - the first patch is intentionnally sent as a single block: splitting it > > > into per library commits with correct Fixes: tags is a tedious work. > > > I would split it for a non RFC series. For now, it is enough to show > > > case the idea. > > > - the last patch shows how an existing log macro is converted, > > > > > > > > very nice. I definitely think this should be implemented for 24.03 > > I am still wondering how this idea should evolve... > > Some points I have in mind but for which I am not sure what is the best. > > Some log helpers were exposed to applications and had no explicit > requirement wrt \n. > Applications may have used those helpers with multiline messages. > So maybe existing *exposed* helpers should be left untouched... and a > new helper would need to be introduced. And the existing helpers deprecated. Internal log helpers should not be exposed to apps. > IOW with an example, cryptodev (missing a RTE_ prefix) CDEV_LOG_ERR > macro is publicly exposed. > A CDEV_LOG_LINE_ERR may be needed to avoid breaking external users. > RTE_CDEV_LOG_ERR should be available, though, right? :-) > > There are a lot of other log macros that let it to the callers to add > a trailing \n. > Should we convert them? I would think that, ideally, yes we should. > Converting the *whole* DPDK code to the new helper (with some > exceptions for people who like multiline logs..) would be nice to > close this topic once and for all. > But it would likely be a nightmare for later fixes that contain logs > and which could introduce regressions in such logs if the backport > does not take care of re-adding a \n. Good point. I wonder how many backports add new logs, or modify existing ones? Are there any automated checks, or a checklist for backports to enable us to catch these sort of things? Naming could be a problem, but we could look to move over existing logs by using new log macro names, which should help us to catch these. Even if the log names are a bit awkward, if they are kept internal-only, it shouldn't matter much. [Plus we would always be free to do a mass-rename back to the original name in future, once the backport issues are reduced]. /Bruce ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 14:11 ` Bruce Richardson @ 2023-11-17 14:21 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-11-17 14:21 UTC (permalink / raw) To: Bruce Richardson Cc: dev, thomas, ferruh.yigit, stephen, Morten Brørup, Luca Boccassi, Kevin Traynor, Xueming(Steven) Li On Fri, Nov 17, 2023 at 3:11 PM Bruce Richardson <bruce.richardson@intel.com> wrote: > > On Fri, Nov 17, 2023 at 02:48:25PM +0100, David Marchand wrote: > > On Fri, Nov 17, 2023 at 2:27 PM Bruce Richardson > > <bruce.richardson@intel.com> wrote: > > > > > > On Fri, Nov 17, 2023 at 02:18:21PM +0100, David Marchand wrote: > > > > Getting readable and consistent logs is important when running a DPDK > > > > application, especially when troubleshooting. > > > > A common issue with logs is when a DPDK change do not add (or on the > > > > contrary add too many \n) in the format string. > > > > > > > > This issue would only get noticed when actually hitting this log (which > > > > may be something difficult to do). > > > > > > > > This series proposes to introduce a new RTE_LOG helper that is > > > > responsible for logging a one line message and spews a build error (with > > > > gcc) if any \n is part of the format string. > > > > > > > > > > > > Note: > > > > - the first patch is intentionnally sent as a single block: splitting it > > > > into per library commits with correct Fixes: tags is a tedious work. > > > > I would split it for a non RFC series. For now, it is enough to show > > > > case the idea. > > > > - the last patch shows how an existing log macro is converted, > > > > > > > > > > > very nice. I definitely think this should be implemented for 24.03 > > > > I am still wondering how this idea should evolve... > > > > Some points I have in mind but for which I am not sure what is the best. > > > > Some log helpers were exposed to applications and had no explicit > > requirement wrt \n. > > Applications may have used those helpers with multiline messages. > > So maybe existing *exposed* helpers should be left untouched... and a > > new helper would need to be introduced. > > And the existing helpers deprecated. Internal log helpers should not be > exposed to apps. I agree. > > > IOW with an example, cryptodev (missing a RTE_ prefix) CDEV_LOG_ERR > > macro is publicly exposed. > > A CDEV_LOG_LINE_ERR may be needed to avoid breaking external users. > > > > RTE_CDEV_LOG_ERR should be available, though, right? :-) Err, yes, but if we make it private, we don't need the RTE_ prefix ? :-) > > > > > There are a lot of other log macros that let it to the callers to add > > a trailing \n. > > Should we convert them? > > I would think that, ideally, yes we should. > > > Converting the *whole* DPDK code to the new helper (with some > > exceptions for people who like multiline logs..) would be nice to > > close this topic once and for all. > > But it would likely be a nightmare for later fixes that contain logs > > and which could introduce regressions in such logs if the backport > > does not take care of re-adding a \n. > > Good point. I wonder how many backports add new logs, or modify existing > ones? Are there any automated checks, or a checklist for backports to Touching logs in backport happens often afaics. 
(to simplify I added a vXX.11.0 tag pointing at vXX.11) $ for i in $(seq 0 4); do echo v21.11.${i}..v21.11.$((i + 1)) $(git diff v21.11.${i}..v21.11.$((i + 1)) | grep -c ^\+.*LOG); done v21.11.0..v21.11.1 122 v21.11.1..v21.11.2 40 v21.11.2..v21.11.3 43 v21.11.3..v21.11.4 30 v21.11.4..v21.11.5 24 $ for i in $(seq 0 2); do echo v22.11.${i}..v22.11.$((i + 1)) $(git diff v22.11.${i}..v22.11.$((i + 1)) | grep -c ^\+.*LOG); done v22.11.0..v22.11.1 0 v22.11.1..v22.11.2 37 v22.11.2..v22.11.3 106 I don't think there are checks on this topic (and to be fair, I don't see which check could be done). > enable us to catch these sort of things? > Naming could be a problem, but we could look to move over existing logs by > using new log macro names, which should help us to catch these. Even if the > log names are a bit awkward, if they are kept internal-only, it shouldn't > matter much. [Plus we would always be free to do a mass-rename back to the > original name in future, once the backport issues are reduced]. This is the way I had in mind. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* RE: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (3 preceding siblings ...) 2023-11-17 13:27 ` [RFC 0/3] Detect superfluous newline in logs Bruce Richardson @ 2023-11-17 13:47 ` Morten Brørup 2023-11-17 14:09 ` David Marchand 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (3 subsequent siblings) 8 siblings, 1 reply; 122+ messages in thread From: Morten Brørup @ 2023-11-17 13:47 UTC (permalink / raw) To: David Marchand, dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen > From: David Marchand [mailto:david.marchand@redhat.com] > Sent: Friday, 17 November 2023 14.18 > > Getting readable and consistent logs is important when running a DPDK > application, especially when troubleshooting. > A common issue with logs is when a DPDK change do not add (or on the > contrary add too many \n) in the format string. > > This issue would only get noticed when actually hitting this log (which > may be something difficult to do). > > This series proposes to introduce a new RTE_LOG helper that is > responsible for logging a one line message and spews a build error > (with gcc) if any \n is part of the format string. > The new helper's name is RTE_LOG_LINE, not RTE_LOG. As far as I can see, RTE_LOG continues working as before - allowing one line of log message to span multiple lines of RTE_LOG() calls with \n in the last of them. Which is good. Anyway, I like the concept. And it solves a real problem. If you want other names, e.g. RTE_LOG for a complete line (appending \n in the macro itself), and RTE_LOG_PART (or similar) for an incomplete line, I wouldn't object. But that would probably break the API. > > Note: > - the first patch is intentionnally sent as a single block: splitting > it > into per library commits with correct Fixes: tags is a tedious work. > I would split it for a non RFC series. For now, it is enough to show > case the idea. > - the last patch shows how an existing log macro is converted, ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 13:47 ` Morten Brørup @ 2023-11-17 14:09 ` David Marchand 2023-11-17 14:17 ` Morten Brørup 0 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-11-17 14:09 UTC (permalink / raw) To: Morten Brørup; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen On Fri, Nov 17, 2023 at 2:47 PM Morten Brørup <mb@smartsharesystems.com> wrote: > > > From: David Marchand [mailto:david.marchand@redhat.com] > > Sent: Friday, 17 November 2023 14.18 > > > > Getting readable and consistent logs is important when running a DPDK > > application, especially when troubleshooting. > > A common issue with logs is when a DPDK change do not add (or on the > > contrary add too many \n) in the format string. > > > > This issue would only get noticed when actually hitting this log (which > > may be something difficult to do). > > > > This series proposes to introduce a new RTE_LOG helper that is > > responsible for logging a one line message and spews a build error > > (with gcc) if any \n is part of the format string. > > > > The new helper's name is RTE_LOG_LINE, not RTE_LOG. Sorry, wrong completion. > > As far as I can see, RTE_LOG continues working as before - allowing one line of log message to span multiple lines of RTE_LOG() calls with \n in the last of them. Which is good. Indeed, we can't break / change RTE_LOG api. This API is too old, that would be a nightmare. > > Anyway, I like the concept. And it solves a real problem. > > If you want other names, e.g. RTE_LOG for a complete line (appending \n in the macro itself), and RTE_LOG_PART (or similar) for an incomplete line, I wouldn't object. But that would probably break the API. No API breaking allowed :-). -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* RE: [RFC 0/3] Detect superfluous newline in logs 2023-11-17 14:09 ` David Marchand @ 2023-11-17 14:17 ` Morten Brørup 0 siblings, 0 replies; 122+ messages in thread From: Morten Brørup @ 2023-11-17 14:17 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen > From: David Marchand [mailto:david.marchand@redhat.com] > Sent: Friday, 17 November 2023 15.10 > > On Fri, Nov 17, 2023 at 2:47 PM Morten Brørup > <mb@smartsharesystems.com> wrote: > > > > > From: David Marchand [mailto:david.marchand@redhat.com] > > > Sent: Friday, 17 November 2023 14.18 > > > > > > Getting readable and consistent logs is important when running a > DPDK > > > application, especially when troubleshooting. > > > A common issue with logs is when a DPDK change do not add (or on > the > > > contrary add too many \n) in the format string. > > > > > > This issue would only get noticed when actually hitting this log > (which > > > may be something difficult to do). > > > > > > This series proposes to introduce a new RTE_LOG helper that is > > > responsible for logging a one line message and spews a build error > > > (with gcc) if any \n is part of the format string. > > > > > > > The new helper's name is RTE_LOG_LINE, not RTE_LOG. > > Sorry, wrong completion. > > > > > As far as I can see, RTE_LOG continues working as before - allowing > one line of log message to span multiple lines of RTE_LOG() calls with > \n in the last of them. Which is good. > > Indeed, we can't break / change RTE_LOG api. > This API is too old, that would be a nightmare. > > > > > Anyway, I like the concept. And it solves a real problem. > > > > If you want other names, e.g. RTE_LOG for a complete line (appending > \n in the macro itself), and RTE_LOG_PART (or similar) for an > incomplete line, I wouldn't object. But that would probably break the > API. > > No API breaking allowed :-). OK, then: Series-acked-by: Morten Brørup <mb@smartsharesystems.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
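To make the distinction discussed above concrete, here is a small usage sketch. It assumes a tree carrying the RTE_LOG_LINE helper proposed in patch 2/3; the RTE_LOGTYPE_USER1 type and the function names are only for illustration.

    #include <rte_log.h>

    /* Unchanged behaviour: one logical line built from several RTE_LOG()
     * calls, with the caller supplying the final '\n'. */
    void dump_port_stats(unsigned int rx, unsigned int tx)
    {
            RTE_LOG(INFO, USER1, "port stats:");
            RTE_LOG(INFO, USER1, " rx=%u", rx);
            RTE_LOG(INFO, USER1, " tx=%u\n", tx);
    }

    /* New helper: the '\n' is appended by the macro itself and, with gcc,
     * a format string that already embeds one is rejected at build time. */
    void report_link_up(unsigned int port_id)
    {
            RTE_LOG_LINE(INFO, USER1, "port %u link up", port_id);
    }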
* [RFC v2 00/14] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (4 preceding siblings ...) 2023-11-17 13:47 ` Morten Brørup @ 2023-12-08 14:59 ` David Marchand 2023-12-08 14:59 ` [RFC v2 01/14] hash: remove some dead code David Marchand ` (13 more replies) 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (2 subsequent siblings) 8 siblings, 14 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb Getting readable and consistent logs is important when running a DPDK application, especially when troubleshooting. A common issue with logs is when a DPDK change do not add (or on the contrary add too many \n) in the format string. This issue would only get noticed when actually hitting this log (which may be a situation hard to reach). This series proposes to introduce a new RTE_LOG_LINE helper that is responsible for logging a one line message and spews a build error (with gcc) if any \n is part of the format string. Because this is still a RFC and a lot of changes are added in this v2, no ack from the v1 has been kept. Since the v1 discussion on the cover letter, I changed my mind, and made the choice to break existing logging helpers exported in the public API. The reasoning is that those should not be used in the first place: logs should be produced only by the library that registers the logtype. Some multiline logging for debugging and the test assert macros are still present, but in this case an explicit call to RTE_LOG() is done. This can be checked with a simple: $ git grep 'RTE_LOG(' -- lib/ :^lib/log/ lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n" lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n" lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \ lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...) 
RTE_LOG(lvl, IPFRAG, fmt, ##args) lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" Changes since RFC v1: - rebased after Stephen log changes, - added more fixes as I was making progress on the topic, - added a check so dpdk developers stop using RTE_LOG(), - added preparation patches, like "lib: replace logging helpers", - converted all libraries, keeping some special cases with explicit calls to RTE_LOG, -- David Marchand David Marchand (14): hash: remove some dead code regexdev: fix logtype register lib: use dedicated logtypes lib: add newline in logs lib: remove redundant newline from logs eal/linux: remove log paraphrasing the doc bpf: remove log level in internal helper lib: simplify multilines log messages rcu: introduce a logging helper vhost: improve log for memory dumping configuration log: add a per line log helper lib: convert to per line logging lib: replace logging helpers lib: use per line logging in helpers devtools/checkpatches.sh | 8 + drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/acl/acl_bld.c | 28 +- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 4 +- lib/bbdev/rte_bbdev.c | 11 +- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 8 +- lib/bpf/bpf_validate.c | 44 +- lib/cfgfile/rte_cfgfile.c | 18 +- lib/compressdev/rte_compressdev_internal.h | 5 +- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 4 +- lib/cryptodev/rte_cryptodev.h | 16 +- lib/dispatcher/rte_dispatcher.c | 12 +- lib/dmadev/rte_dmadev.c | 8 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 +- lib/eal/common/eal_common_lcore.c | 26 +- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +- lib/eal/common/eal_common_memzone.c | 24 +- lib/eal/common/eal_common_options.c | 236 +++--- lib/eal/common/eal_common_proc.c | 112 +-- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 +- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 +- lib/eal/common/malloc_mp.c | 72 +- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 74 +- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +-- lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 +- lib/eal/linux/eal_hugepage_info.c | 38 +- lib/eal/linux/eal_interrupts.c | 116 +-- lib/eal/linux/eal_lcore.c | 4 +- 
lib/eal/linux/eal_memalloc.c | 120 +-- lib/eal/linux/eal_memory.c | 208 ++--- lib/eal/linux/eal_thread.c | 4 +- lib/eal/linux/eal_timer.c | 14 +- lib/eal/linux/eal_vfio.c | 270 +++---- lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +- lib/eal/windows/eal.c | 36 +- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 6 +- lib/eal/windows/eal_memalloc.c | 50 +- lib/eal/windows/eal_memory.c | 22 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 4 +- lib/eal/windows/rte_thread.c | 28 +- lib/efd/rte_efd.c | 58 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 858 ++++++++++----------- lib/ethdev/rte_ethdev.h | 51 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/eventdev/eventdev_pmd.h | 14 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +- lib/eventdev/rte_event_dma_adapter.c | 18 +- lib/eventdev/rte_event_eth_rx_adapter.c | 40 +- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 21 +- lib/eventdev/rte_eventdev.c | 10 +- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/gpudev/gpudev.c | 6 +- lib/graph/graph_private.h | 5 +- lib/hash/rte_cuckoo_hash.c | 52 +- lib/hash/rte_cuckoo_hash.h | 11 - lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 +- lib/log/log.c | 6 +- lib/log/rte_log.h | 21 + lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/mempool/rte_mempool.c | 24 +- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/metrics/rte_metrics_telemetry.c | 6 +- lib/mldev/rte_mldev.c | 102 +-- lib/mldev/rte_mldev.h | 5 +- lib/net/rte_net_crc.c | 14 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/node/node_private.h | 6 +- lib/pdump/rte_pdump.c | 113 ++- lib/pipeline/rte_pipeline.c | 228 +++--- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 +- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 36 +- lib/power/power_acpi_cpufreq.c | 116 +-- lib/power/power_amd_pstate_cpufreq.c | 132 ++-- lib/power/power_common.c | 14 +- lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 130 ++-- lib/power/power_intel_uncore.c | 72 +- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 156 ++-- 
lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +- lib/power/rte_power_uncore.c | 14 +- lib/rawdev/rte_rawdev_pmd.h | 4 +- lib/rcu/rte_rcu_qsbr.c | 66 +- lib/rcu/rte_rcu_qsbr.h | 17 +- lib/regexdev/rte_regexdev.c | 88 +-- lib/regexdev/rte_regexdev.h | 13 +- lib/reorder/rte_reorder.c | 32 +- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 +++---- lib/stack/rte_stack.c | 8 +- lib/stack/stack_pvt.h | 4 +- lib/table/rte_table_acl.c | 72 +- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 +- lib/table/rte_table_hash_key32.c | 38 +- lib/table/rte_table_hash_key8.c | 38 +- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 +- lib/table/rte_table_lpm_ipv6.c | 44 +- lib/table/rte_table_stub.c | 4 +- lib/telemetry/telemetry.c | 39 +- lib/vhost/fd_man.c | 8 +- lib/vhost/iotlb.c | 36 +- lib/vhost/socket.c | 102 +-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 +-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 +-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_crypto.c | 12 +- lib/vhost/vhost_user.c | 530 ++++++------- lib/vhost/virtio_net.c | 188 ++--- lib/vhost/virtio_net_ctrl.c | 38 +- 209 files changed, 3982 insertions(+), 3963 deletions(-) create mode 100644 lib/member/member.h -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 01/14] hash: remove some dead code 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 16:53 ` Stephen Hemminger 2023-12-08 20:46 ` Tyler Retzlaff 2023-12-08 14:59 ` [RFC v2 02/14] regexdev: fix logtype register David Marchand ` (12 subsequent siblings) 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ruifeng Wang, Ray Kinsella, Dharmik Thakkar This macro is not used. Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/hash/rte_cuckoo_hash.h | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h index f7afc4dd79..8ea793c66e 100644 --- a/lib/hash/rte_cuckoo_hash.h +++ b/lib/hash/rte_cuckoo_hash.h @@ -29,17 +29,6 @@ #define RETURN_IF_TRUE(cond, retval) #endif -#if defined(RTE_LIBRTE_HASH_DEBUG) -#define ERR_IF_TRUE(cond, fmt, args...) do { \ - if (cond) { \ - RTE_LOG(ERR, HASH, fmt, ##args); \ - return; \ - } \ -} while (0) -#else -#define ERR_IF_TRUE(cond, fmt, args...) -#endif - #include <rte_hash_crc.h> #include <rte_jhash.h> -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 01/14] hash: remove some dead code 2023-12-08 14:59 ` [RFC v2 01/14] hash: remove some dead code David Marchand @ 2023-12-08 16:53 ` Stephen Hemminger 2023-12-08 20:46 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 16:53 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ruifeng Wang, Ray Kinsella, Dharmik Thakkar On Fri, 8 Dec 2023 15:59:35 +0100 David Marchand <david.marchand@redhat.com> wrote: > This macro is not used. > > Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 01/14] hash: remove some dead code 2023-12-08 14:59 ` [RFC v2 01/14] hash: remove some dead code David Marchand 2023-12-08 16:53 ` Stephen Hemminger @ 2023-12-08 20:46 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 20:46 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ruifeng Wang, Ray Kinsella, Dharmik Thakkar On Fri, Dec 08, 2023 at 03:59:35PM +0100, David Marchand wrote: > This macro is not used. > > Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 02/14] regexdev: fix logtype register 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand 2023-12-08 14:59 ` [RFC v2 01/14] hash: remove some dead code David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 16:58 ` Stephen Hemminger ` (2 more replies) 2023-12-08 14:59 ` [RFC v2 03/14] lib: use dedicated logtypes David Marchand ` (11 subsequent siblings) 13 siblings, 3 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Ori Kam, Parav Pandit, Guy Kaneti This library logtype was not initialized so its logs would end up under the 0 logtype, iow, RTE_LOGTYPE_EAL. Fixes: b25246beaefc ("regexdev: add core functions") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/regexdev/rte_regexdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index caec069182..d38a85eb0b 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -19,7 +19,7 @@ static struct { struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS]; } *rte_regexdev_shared_data; -int rte_regexdev_logtype; +RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO); static uint16_t regexdev_find_free_dev(void) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 02/14] regexdev: fix logtype register 2023-12-08 14:59 ` [RFC v2 02/14] regexdev: fix logtype register David Marchand @ 2023-12-08 16:58 ` Stephen Hemminger 2023-12-08 20:46 ` Tyler Retzlaff 2023-12-14 10:11 ` Ori Kam 2 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 16:58 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Ori Kam, Parav Pandit, Guy Kaneti On Fri, 8 Dec 2023 15:59:36 +0100 David Marchand <david.marchand@redhat.com> wrote: > This library logtype was not initialized so its logs would end up under > the 0 logtype, iow, RTE_LOGTYPE_EAL. > > Fixes: b25246beaefc ("regexdev: add core functions") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 02/14] regexdev: fix logtype register 2023-12-08 14:59 ` [RFC v2 02/14] regexdev: fix logtype register David Marchand 2023-12-08 16:58 ` Stephen Hemminger @ 2023-12-08 20:46 ` Tyler Retzlaff 2023-12-14 10:11 ` Ori Kam 2 siblings, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 20:46 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Ori Kam, Parav Pandit, Guy Kaneti On Fri, Dec 08, 2023 at 03:59:36PM +0100, David Marchand wrote: > This library logtype was not initialized so its logs would end up under > the 0 logtype, iow, RTE_LOGTYPE_EAL. > > Fixes: b25246beaefc ("regexdev: add core functions") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* RE: [RFC v2 02/14] regexdev: fix logtype register 2023-12-08 14:59 ` [RFC v2 02/14] regexdev: fix logtype register David Marchand 2023-12-08 16:58 ` Stephen Hemminger 2023-12-08 20:46 ` Tyler Retzlaff @ 2023-12-14 10:11 ` Ori Kam 2 siblings, 0 replies; 122+ messages in thread From: Ori Kam @ 2023-12-14 10:11 UTC (permalink / raw) To: David Marchand, dev Cc: NBU-Contact-Thomas Monjalon (EXTERNAL), ferruh.yigit, bruce.richardson, stephen, mb, stable, Parav Pandit, Guy Kaneti > -----Original Message----- > From: David Marchand <david.marchand@redhat.com> > Sent: Friday, December 8, 2023 5:00 PM > > This library logtype was not initialized so its logs would end up under > the 0 logtype, iow, RTE_LOGTYPE_EAL. > > Fixes: b25246beaefc ("regexdev: add core functions") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > lib/regexdev/rte_regexdev.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c > index caec069182..d38a85eb0b 100644 > --- a/lib/regexdev/rte_regexdev.c > +++ b/lib/regexdev/rte_regexdev.c > @@ -19,7 +19,7 @@ static struct { > struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS]; > } *rte_regexdev_shared_data; > > -int rte_regexdev_logtype; > +RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO); > > static uint16_t > regexdev_find_free_dev(void) > -- > 2.43.0 Acked-by: Ori Kam <orika@nvidia.com> Best, Or ^ permalink raw reply [flat|nested] 122+ messages in thread
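For reference, the registration pattern this fix relies on, as a sketch for a hypothetical out-of-tree component. The mylib names are invented; inside DPDK the per-library meson rules define the default logtype name, which is why the patch itself can simply use RTE_LOG_REGISTER_DEFAULT.

    #include <rte_log.h>

    /* Register a dedicated logtype in a constructor when the binary or
     * shared object is loaded; NOTICE is the default level until the user
     * raises it (e.g. with --log-level=lib.mylib:debug). */
    RTE_LOG_REGISTER(mylib_logtype, lib.mylib, NOTICE);

    #define MYLIB_LOG(level, fmt, ...) \
            rte_log(RTE_LOG_ ## level, mylib_logtype, \
                    "MYLIB: " fmt "\n", ##__VA_ARGS__)

    void mylib_init(void)
    {
            /* Ends up under "lib.mylib", not under logtype 0 (EAL). */
            MYLIB_LOG(INFO, "initialized");
    }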
* [RFC v2 03/14] lib: use dedicated logtypes 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand 2023-12-08 14:59 ` [RFC v2 01/14] hash: remove some dead code David Marchand 2023-12-08 14:59 ` [RFC v2 02/14] regexdev: fix logtype register David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:00 ` Stephen Hemminger ` (2 more replies) 2023-12-08 14:59 ` [RFC v2 04/14] lib: add newline in logs David Marchand ` (10 subsequent siblings) 13 siblings, 3 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Akhil Goyal, Fan Zhang, Andrew Rybchenko, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V No printf! When a dedicated log helper exists, use it. And no usurpation please: a library should log under its logtype (see the eventdev rx adapter update for example). Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for consistency with the rest of eventdev (private) macros. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/cryptodev/rte_cryptodev.c | 2 +- lib/ethdev/ethdev_driver.c | 4 ++-- lib/ethdev/ethdev_private.c | 2 +- lib/ethdev/rte_class_eth.c | 2 +- lib/eventdev/rte_event_dma_adapter.c | 4 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 12 ++++++------ lib/eventdev/rte_eventdev.c | 6 +++--- lib/mempool/rte_mempool_ops.c | 2 +- 8 files changed, 17 insertions(+), 17 deletions(-) diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index b258827734..d8769f0b8d 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2682,7 +2682,7 @@ rte_cryptodev_driver_id_get(const char *name) int driver_id = -1; if (name == NULL) { - RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL"); + CDEV_LOG_DEBUG("name pointer NULL"); return -1; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index fff4b7b4cd..55a9dcc565 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_LOG(ERR, EAL, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", dargs); result = -1; goto parse_cleanup; @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index e98b7188b0..0e1c7b23c1 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); return str == NULL ? 
-1 : 0; } diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index b61dae849d..311beb17cb 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_LOG(ERR, EAL, "cannot parse argument list\n"); + RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); rte_errno = EINVAL; return NULL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index af4b5ad388..cbf9405438 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1046,7 +1046,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->vchanq == NULL) { - printf("Queue pair add not supported\n"); + RTE_EDEV_LOG_ERR("Queue pair add not supported"); return -ENOMEM; } } @@ -1057,7 +1057,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->tqmap == NULL) { - printf("tq pair add not supported\n"); + RTE_EDEV_LOG_ERR("tq pair add not supported"); return -ENOMEM; } } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 6db03adf04..82ae31712d 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -314,9 +314,9 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, } \ } while (0) -#define RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(port_id, retval) do { \ +#define RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_EDEV_LOG_ERR("Invalid port_id=%u", port_id); \ ret = retval; \ goto error; \ } \ @@ -3671,7 +3671,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3743,7 +3743,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3813,7 +3813,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3868,7 +3868,7 @@ handle_rxa_instance_get(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); diff --git 
a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 0ca32d6721..ae50821a3f 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1428,8 +1428,8 @@ rte_event_vector_pool_create(const char *name, unsigned int n, int ret; if (!nb_elem) { - RTE_LOG(ERR, EVENTDEV, - "Invalid number of elements=%d requested\n", nb_elem); + RTE_EDEV_LOG_ERR("Invalid number of elements=%d requested", + nb_elem); rte_errno = EINVAL; return NULL; } @@ -1444,7 +1444,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n"); + RTE_EDEV_LOG_ERR("error setting mempool handler"); goto err; } diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index ae1d288f27..e871de9ec9 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -46,7 +46,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", __func__, h->name); rte_errno = EEXIST; return -EEXIST; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 03/14] lib: use dedicated logtypes 2023-12-08 14:59 ` [RFC v2 03/14] lib: use dedicated logtypes David Marchand @ 2023-12-08 17:00 ` Stephen Hemminger 2023-12-08 20:49 ` Tyler Retzlaff 2023-12-16 9:47 ` Andrew Rybchenko 2 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:00 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Akhil Goyal, Fan Zhang, Andrew Rybchenko, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V On Fri, 8 Dec 2023 15:59:37 +0100 David Marchand <david.marchand@redhat.com> wrote: > No printf! > When a dedicated log helper exists, use it. > And no usurpation please: a library should log under its logtype > (see the eventdev rx adapter update for example). > > Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for > consistency with the rest of eventdev (private) macros. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 03/14] lib: use dedicated logtypes 2023-12-08 14:59 ` [RFC v2 03/14] lib: use dedicated logtypes David Marchand 2023-12-08 17:00 ` Stephen Hemminger @ 2023-12-08 20:49 ` Tyler Retzlaff 2023-12-16 9:47 ` Andrew Rybchenko 2 siblings, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 20:49 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Akhil Goyal, Fan Zhang, Andrew Rybchenko, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V On Fri, Dec 08, 2023 at 03:59:37PM +0100, David Marchand wrote: > No printf! > When a dedicated log helper exists, use it. > And no usurpation please: a library should log under its logtype > (see the eventdev rx adapter update for example). > > Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for > consistency with the rest of eventdev (private) macros. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 03/14] lib: use dedicated logtypes 2023-12-08 14:59 ` [RFC v2 03/14] lib: use dedicated logtypes David Marchand 2023-12-08 17:00 ` Stephen Hemminger 2023-12-08 20:49 ` Tyler Retzlaff @ 2023-12-16 9:47 ` Andrew Rybchenko 2 siblings, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:47 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Akhil Goyal, Fan Zhang, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V On 12/8/23 17:59, David Marchand wrote: > No printf! > When a dedicated log helper exists, use it. > And no usurpation please: a library should log under its logtype > (see the eventdev rx adapter update for example). > > Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for > consistency with the rest of eventdev (private) macros. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
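A short illustration of why a library should log under its own type: runtime level control is applied per logtype, so a message tagged with RTE_LOGTYPE_EAL ignores the library's own level setting. The lib.mylib name is invented and the dynamic rte_log_register() call is only there to keep the sketch self-contained.

    #include <rte_log.h>

    void show_per_type_filtering(void)
    {
            int mylib_logtype = rte_log_register("lib.mylib");

            if (mylib_logtype < 0)
                    return;

            /* Raise verbosity for this library only. */
            rte_log_set_level(mylib_logtype, RTE_LOG_DEBUG);

            /* Honoured: the message carries the library's own logtype. */
            rte_log(RTE_LOG_DEBUG, mylib_logtype,
                    "debug message for lib.mylib\n");

            /* Not controlled by the line above: tagged EAL, so it follows
             * EAL's level instead of lib.mylib's. */
            rte_log(RTE_LOG_DEBUG, RTE_LOGTYPE_EAL,
                    "hidden unless EAL itself is at DEBUG\n");
    }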
* [RFC v2 04/14] lib: add newline in logs 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (2 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 03/14] lib: use dedicated logtypes David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:01 ` Stephen Hemminger ` (2 more replies) 2023-12-08 14:59 ` [RFC v2 05/14] lib: remove redundant newline from logs David Marchand ` (9 subsequent siblings) 13 siblings, 3 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Harman Kalra, Andrew Rybchenko, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala Fix places leading to a log message not terminated with a newline. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/eal/common/eal_common_options.c | 2 +- lib/eal/linux/eal_hugepage_info.c | 2 +- lib/eal/linux/eal_interrupts.c | 2 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/rte_ethdev.c | 40 ++++++++++++++--------------- lib/lpm/rte_lpm6.c | 6 ++--- lib/power/guest_channel.c | 2 +- lib/power/rte_power_pmd_mgmt.c | 6 ++--- 8 files changed, 31 insertions(+), 31 deletions(-) diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index a6d21f1cba..e9ba01fb89 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -2141,7 +2141,7 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); + RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); return -EPERM; } diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 581d9dfc91..36a495fb1f 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", + RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index d4919dff45..eabac24992 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized"); + RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); return -EINVAL; } } else { diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index 320e3e0093..ddb559aa95 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p", + RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", (void *)eth_dev, (void *)pci_dev); return; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 3858983fcc..b9d99ece15 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -724,7 +724,7 @@ 
rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name"); + RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); return -EINVAL; } @@ -2394,41 +2394,41 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu\n", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported"); + "Attempt to use locked device memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported"); + "Attempt to use DPDK memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue"); + "Attempt to use mutually exclusive memory settings for Rx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set"); + "Attempt to force Rx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0", + "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -2597,41 +2597,41 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported"); + "Attempt to use locked device memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported"); + "Attempt to use DPDK memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory 
&& conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue"); + "Attempt to use mutually exclusive memory settings for Tx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set"); + "Attempt to force Tx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0", + "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -6716,7 +6716,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); return -EINVAL; } @@ -6752,7 +6752,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); return -EINVAL; } @@ -6780,7 +6780,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration", + "Cannot set IP reassembly configuration\n", port_id); return -EINVAL; } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 873cc8bc26..24ce7dd022 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index cc05347425..a6f2097d5b 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -90,7 +90,7 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s", fd_path); + "file %s\n", fd_path); goto error; } /* QEMU needs a delay after connection */ diff --git 
a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 38f8384085..6f18ed0adf 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); + RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); return -EINVAL; } pause_duration = duration; @@ -709,7 +709,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); + RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); + RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
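To make the effect of these hunks concrete: the plain RTE_LOG() and RTE_ETHDEV_LOG() call sites fixed above pass their format string straight to the log stream, so a message that lacks a trailing "\n" simply runs into whatever gets logged next. A minimal standalone sketch of the symptom (plain C, not DPDK code; log_msg() below is only a stand-in for a logger that, like these call sites, adds no newline of its own):

#include <stdarg.h>
#include <stdio.h>

/* Stand-in for a printf-style logger that does not append a newline. */
static void log_msg(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

int main(void)
{
	/* Missing trailing "\n" ... */
	log_msg("Cannot set max SIMD bitwidth - user runtime override enabled");
	/* ... so this message is glued onto the same output line. */
	log_msg("the efd_counter_size is oversized\n");
	return 0;
}

Appending "\n" to the format string, as each hunk above does, keeps every message on its own output line.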
* Re: [RFC v2 04/14] lib: add newline in logs 2023-12-08 14:59 ` [RFC v2 04/14] lib: add newline in logs David Marchand @ 2023-12-08 17:01 ` Stephen Hemminger 2023-12-11 12:38 ` David Marchand 2023-12-08 20:50 ` Tyler Retzlaff 2023-12-16 9:43 ` Andrew Rybchenko 2 siblings, 1 reply; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:01 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Harman Kalra, Andrew Rybchenko, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala On Fri, 8 Dec 2023 15:59:38 +0100 David Marchand <david.marchand@redhat.com> wrote: > Fix places leading to a log message not terminated with a newline. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> Maybe a coccinelle fixup script would help in future. Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 04/14] lib: add newline in logs 2023-12-08 17:01 ` Stephen Hemminger @ 2023-12-11 12:38 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-11 12:38 UTC (permalink / raw) To: Stephen Hemminger Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Harman Kalra, Andrew Rybchenko, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala On Fri, Dec 8, 2023 at 6:02 PM Stephen Hemminger <stephen@networkplumber.org> wrote: > > On Fri, 8 Dec 2023 15:59:38 +0100 > David Marchand <david.marchand@redhat.com> wrote: > > > Fix places leading to a log message not terminated with a newline. > > > > Cc: stable@dpdk.org > > > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > Maybe a coccinelle fixup script would help in future. Checkpatch will now complain for new users of RTE_LOG(). So hopefully, using RTE_LOG() will be an exception, rather than a normal occurrence. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 04/14] lib: add newline in logs 2023-12-08 14:59 ` [RFC v2 04/14] lib: add newline in logs David Marchand 2023-12-08 17:01 ` Stephen Hemminger @ 2023-12-08 20:50 ` Tyler Retzlaff 2023-12-16 9:43 ` Andrew Rybchenko 2 siblings, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 20:50 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Harman Kalra, Andrew Rybchenko, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala On Fri, Dec 08, 2023 at 03:59:38PM +0100, David Marchand wrote: > Fix places leading to a log message not terminated with a newline. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 04/14] lib: add newline in logs 2023-12-08 14:59 ` [RFC v2 04/14] lib: add newline in logs David Marchand 2023-12-08 17:01 ` Stephen Hemminger 2023-12-08 20:50 ` Tyler Retzlaff @ 2023-12-16 9:43 ` Andrew Rybchenko 2 siblings, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:43 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Harman Kalra, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala On 12/8/23 17:59, David Marchand wrote: > Fix places leading to a log message not terminated with a newline. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> For ethdev: Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 05/14] lib: remove redundant newline from logs 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (3 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 04/14] lib: add newline in logs David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:02 ` Stephen Hemminger ` (2 more replies) 2023-12-08 14:59 ` [RFC v2 06/14] eal/linux: remove log paraphrasing the doc David Marchand ` (8 subsequent siblings) 13 siblings, 3 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Mattias Rönnblom, Chengwen Feng, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia Fix places where two newline characters may be logged. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> --- Changes since RFC v1: - split fixes on direct calls to printf or RTE_LOG in a previous patch, --- drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/bbdev/rte_bbdev.c | 6 +- lib/cfgfile/rte_cfgfile.c | 14 ++-- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 2 +- lib/dispatcher/rte_dispatcher.c | 12 +-- lib/dmadev/rte_dmadev.c | 2 +- lib/eal/windows/eal_memory.c | 2 +- lib/eventdev/eventdev_pmd.h | 6 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +-- lib/eventdev/rte_event_dma_adapter.c | 14 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 4 +- lib/eventdev/rte_eventdev.c | 4 +- lib/metrics/rte_metrics_telemetry.c | 2 +- lib/mldev/rte_mldev.c | 102 ++++++++++++------------ lib/net/rte_net_crc.c | 6 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/rcu/rte_rcu_qsbr.c | 4 +- lib/rcu/rte_rcu_qsbr.h | 8 +- lib/stack/rte_stack.c | 8 +- lib/vhost/vhost_crypto.c | 6 +- 27 files changed, 135 insertions(+), 135 deletions(-) diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c index 52d6d010c7..f21f9cc5a0 100644 --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c @@ -407,7 +407,7 @@ ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer) resp_param->result = ipsec_mb_qp_release(dev, qp_id); break; default: - CDEV_LOG_ERR("invalid mp request type\n"); + CDEV_LOG_ERR("invalid mp request type"); } out: diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index cfebea09c7..e09bb97abb 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -1106,12 +1106,12 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, intr_handle = dev->intr_handle; if (intr_handle == NULL) { - rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id); + rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id); return -ENOTSUP; } if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) { - rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n", + rte_bbdev_log(ERR, "Device %u queue_id %u is too big", dev_id, queue_id); return -ENOTSUP; } @@ -1120,7 +1120,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, 
int op, ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (ret && (ret != -EEXIST)) { rte_bbdev_log(ERR, - "dev %u q %u int ctl error op %d epfd %d vec %u\n", + "dev %u q %u int ctl error op %d epfd %d vec %u", dev_id, queue_id, op, epfd, vec); return ret; } diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index eefba6e408..2f9cc0722a 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -137,7 +137,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) unsigned int i; if (!params) { - CFG_LOG(ERR, "missing cfgfile parameters\n"); + CFG_LOG(ERR, "missing cfgfile parameters"); return -EINVAL; } @@ -150,7 +150,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) } if (valid_comment == 0) { - CFG_LOG(ERR, "invalid comment characters %c\n", + CFG_LOG(ERR, "invalid comment characters %c", params->comment_character); return -ENOTSUP; } @@ -188,7 +188,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, lineno++; if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) { CFG_LOG(ERR, " line %d - no \\n found on string. " - "Check if line too long\n", lineno); + "Check if line too long", lineno); goto error1; } /* skip parsing if comment character found */ @@ -209,7 +209,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, char *end = memchr(buffer, ']', len); if (end == NULL) { CFG_LOG(ERR, - "line %d - no terminating ']' character found\n", + "line %d - no terminating ']' character found", lineno); goto error1; } @@ -225,7 +225,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, split[1] = memchr(buffer, '=', len); if (split[1] == NULL) { CFG_LOG(ERR, - "line %d - no '=' character found\n", + "line %d - no '=' character found", lineno); goto error1; } @@ -249,7 +249,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, if (!(flags & CFG_FLAG_EMPTY_VALUES) && (*split[1] == '\0')) { CFG_LOG(ERR, - "line %d - cannot use empty values\n", + "line %d - cannot use empty values", lineno); goto error1; } @@ -414,7 +414,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname, return 0; } - CFG_LOG(ERR, "entry name doesn't exist\n"); + CFG_LOG(ERR, "entry name doesn't exist"); return -EINVAL; } diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c index 156bccd972..762b44f03e 100644 --- a/lib/compressdev/rte_compressdev_pmd.c +++ b/lib/compressdev/rte_compressdev_pmd.c @@ -100,12 +100,12 @@ rte_compressdev_pmd_create(const char *name, struct rte_compressdev *compressdev; if (params->name[0] != '\0') { - COMPRESSDEV_LOG(INFO, "User specified device name = %s\n", + COMPRESSDEV_LOG(INFO, "User specified device name = %s", params->name); name = params->name; } - COMPRESSDEV_LOG(INFO, "Creating compressdev %s\n", name); + COMPRESSDEV_LOG(INFO, "Creating compressdev %s", name); COMPRESSDEV_LOG(INFO, "Init parameters - name: %s, socket id: %d", name, params->socket_id); diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index d8769f0b8d..a3a8fc9c07 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2072,7 +2072,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id, } if (xforms == NULL) { - CDEV_LOG_ERR("Invalid xform\n"); + CDEV_LOG_ERR("Invalid xform"); rte_errno = EINVAL; return NULL; } diff --git a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c index 10d02edde9..95dd41b818 100644 --- a/lib/dispatcher/rte_dispatcher.c +++ 
b/lib/dispatcher/rte_dispatcher.c @@ -246,7 +246,7 @@ evd_service_register(struct rte_dispatcher *dispatcher) rc = rte_service_component_register(&service, &dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Registration of dispatcher service " - "%s failed with error code %d\n", + "%s failed with error code %d", service.name, rc); return rc; @@ -260,7 +260,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher) rc = rte_service_component_unregister(dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " - "failed with error code %d\n", rc); + "failed with error code %d", rc); return rc; } @@ -279,7 +279,7 @@ rte_dispatcher_create(uint8_t event_dev_id) RTE_CACHE_LINE_SIZE, socket_id); if (dispatcher == NULL) { - RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher"); rte_errno = ENOMEM; return NULL; } @@ -483,7 +483,7 @@ evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); if (unreg_handler == NULL) { - RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + RTE_EDEV_LOG_ERR("Invalid handler id %d", handler_id); return -EINVAL; } @@ -602,7 +602,7 @@ rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); if (unreg_finalizer == NULL) { - RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + RTE_EDEV_LOG_ERR("Invalid finalizer id %d", finalizer_id); return -EINVAL; } @@ -636,7 +636,7 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) */ if (rc != 0) RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " - "service component run state to %d\n", rc, + "service component run state to %d", rc, state); RTE_VERIFY(rc == 0); diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 4e5e420c82..009a21849a 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -726,7 +726,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status * return -EINVAL; if (vchan >= dev->data->dev_conf.nb_vchans) { - RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan); + RTE_DMA_LOG(ERR, "Device %u vchan %u out of range", dev_id, vchan); return -EINVAL; } diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index 31410a41fd..fd39155163 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -110,7 +110,7 @@ eal_mem_win32api_init(void) VirtualAlloc2_ptr = (VirtualAlloc2_type)( (void *)GetProcAddress(library, function)); if (VirtualAlloc2_ptr == NULL) { - RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")\n", + RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")", library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 30bd90085c..2ec5aec0a8 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -49,14 +49,14 @@ extern "C" { /* Macros to check for valid device */ #define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return retval; \ } \ } while (0) #define RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, errno, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ rte_errno = errno; \ return retval; \ } \ @@ -64,7 +64,7 @@ extern "C" { #define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return; \ } \ } while (0) diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 1b435c9f0e..d46595d190 100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -133,7 +133,7 @@ static struct event_crypto_adapter **event_crypto_adapter; /* Macros to check for valid adapter */ #define EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!eca_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -309,7 +309,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) return -EIO; @@ -319,7 +319,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -391,7 +391,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, sizeof(struct crypto_device_info), 0, socket_id); if (adapter->cdevs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices"); eca_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -1403,7 +1403,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1436,7 +1436,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index cbf9405438..4196164305 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -20,7 +20,7 @@ #define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ do { \ if (!edma_adapter_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id 
= %d", id); \ return retval; \ } \ } while (0) @@ -313,7 +313,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_dev_configure(evdev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); if (started) { if (rte_event_dev_start(evdev_id)) return -EIO; @@ -323,7 +323,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_port_setup(evdev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); return ret; } @@ -407,7 +407,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, num_dma_dev * sizeof(struct dma_device_info), 0, socket_id); if (adapter->dma_devs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -417,7 +417,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, for (i = 0; i < num_dma_dev; i++) { ret = rte_dma_info_get(i, &info); if (ret) { - RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + RTE_EDEV_LOG_ERR("Failed to get dma device info"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return ret; @@ -1297,7 +1297,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1326,7 +1326,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 82ae31712d..1b83a55b5c 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -293,14 +293,14 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ return retval; \ } \ } while (0) #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_GOTO_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ ret = retval; \ goto error; \ } \ @@ -308,7 +308,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, retval) do { \ if ((token) == NULL || strlen(token) == 0 || !isdigit(*token)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token\n"); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token"); \ ret = retval; \ goto error; \ } \ @@ -1540,7 +1540,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) @@ -1551,7 +1551,7 @@ rxa_default_conf_cb(uint8_t 
id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -1628,7 +1628,7 @@ rxa_create_intr_thread(struct event_eth_rx_adapter *rx_adapter) if (!err) return 0; - RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); error: rte_ring_free(rx_adapter->intr_ring); @@ -1644,12 +1644,12 @@ rxa_destroy_intr_thread(struct event_eth_rx_adapter *rx_adapter) err = pthread_cancel((pthread_t)rx_adapter->rx_intr_thread.opaque_id); if (err) - RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d\n", + RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d", err); err = rte_thread_join(rx_adapter->rx_intr_thread, NULL); if (err) - RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); rte_ring_free(rx_adapter->intr_ring); @@ -1915,7 +1915,7 @@ rxa_init_service(struct event_eth_rx_adapter *rx_adapter, uint8_t id) if (rte_mbuf_dyn_rx_timestamp_register( &event_eth_rx_timestamp_dynfield_offset, &event_eth_rx_timestamp_dynflag) != 0) { - RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf\n"); + RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf"); return -rte_errno; } @@ -2445,7 +2445,7 @@ rxa_create(uint8_t id, uint8_t dev_id, RTE_DIM(default_rss_key)); if (rx_adapter->eth_devices == NULL) { - RTE_EDEV_LOG_ERR("failed to get mem for eth devices\n"); + RTE_EDEV_LOG_ERR("failed to get mem for eth devices"); rte_free(rx_adapter); return -ENOMEM; } @@ -2497,12 +2497,12 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params, return 0; } else if (!rxa_params->use_queue_event_buf && rxa_params->event_buf_size == 0) { - RTE_EDEV_LOG_ERR("event buffer size can't be zero\n"); + RTE_EDEV_LOG_ERR("event buffer size can't be zero"); return -EINVAL; } else if (rxa_params->use_queue_event_buf && rxa_params->event_buf_size != 0) { RTE_EDEV_LOG_ERR("event buffer size needs to be configured " - "as part of queue add\n"); + "as part of queue add"); return -EINVAL; } @@ -3597,7 +3597,7 @@ handle_rxa_stats(const char *cmd __rte_unused, /* Get Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id, &rx_adptr_stats)) { - RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats"); return -1; } @@ -3636,7 +3636,7 @@ handle_rxa_stats_reset(const char *cmd __rte_unused, /* Reset Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) { - RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats"); return -1; } diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c index 360d5caf6a..56435be991 100644 --- a/lib/eventdev/rte_event_eth_tx_adapter.c +++ b/lib/eventdev/rte_event_eth_tx_adapter.c @@ -334,7 +334,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, pc); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); if (started) { if (rte_event_dev_start(dev_id)) diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 27466707bc..3f22e85173 100644 --- 
a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -106,7 +106,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to configure event dev %u\n", dev_id); + EVTIM_LOG_ERR("failed to configure event dev %u", dev_id); if (started) if (rte_event_dev_start(dev_id)) return -EIO; @@ -116,7 +116,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to setup event port %u on event dev %u\n", + EVTIM_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id); return ret; } diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index ae50821a3f..157752868d 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1007,13 +1007,13 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t } if (*dev->dev_ops->port_link == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } if (profile_id && *dev->dev_ops->port_link_profile == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 5be21b2e86..1d133e1f8c 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -363,7 +363,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names, } } if (j == num_metrics) { - METRICS_LOG_WARN("Invalid stat name %s\n", + METRICS_LOG_WARN("Invalid stat name %s", stat_names[i]); free(names); return -EINVAL; diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c index cc5f2e0cc6..196b1850e6 100644 --- a/lib/mldev/rte_mldev.c +++ b/lib/mldev/rte_mldev.c @@ -159,7 +159,7 @@ int rte_ml_dev_init(size_t dev_max) { if (dev_max == 0 || dev_max > INT16_MAX) { - RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)\n", dev_max, INT16_MAX); + RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)", dev_max, INT16_MAX); rte_errno = EINVAL; return -rte_errno; } @@ -217,7 +217,7 @@ rte_ml_dev_socket_id(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -232,7 +232,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -241,7 +241,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) return -ENOTSUP; if (dev_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL", dev_id); return -EINVAL; } memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); @@ -257,7 +257,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -271,7 +271,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) } if (config == NULL) 
{ - RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL", dev_id); return -EINVAL; } @@ -280,7 +280,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) return ret; if (config->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u\n", dev_id, + RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u", dev_id, config->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } @@ -294,7 +294,7 @@ rte_ml_dev_close(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -318,7 +318,7 @@ rte_ml_dev_start(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -345,7 +345,7 @@ rte_ml_dev_stop(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -372,7 +372,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -386,7 +386,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, } if (qp_conf == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL", dev_id); return -EINVAL; } @@ -404,7 +404,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -413,7 +413,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) return -ENOTSUP; if (stats == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL", dev_id); return -EINVAL; } memset(stats, 0, sizeof(struct rte_ml_dev_stats)); @@ -427,7 +427,7 @@ rte_ml_dev_stats_reset(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return; } @@ -445,7 +445,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -462,7 +462,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -471,12 +471,12 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i return -ENOTSUP; if (name == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL\n", dev_id); + 
RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL", dev_id); return -EINVAL; } @@ -490,7 +490,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -499,12 +499,12 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t return -ENOTSUP; if (stat_ids == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL", dev_id); return -EINVAL; } @@ -518,7 +518,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_ struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -535,7 +535,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -544,7 +544,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) return -ENOTSUP; if (fd == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL", dev_id); return -EINVAL; } @@ -557,7 +557,7 @@ rte_ml_dev_selftest(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -574,7 +574,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -583,12 +583,12 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * return -ENOTSUP; if (params == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL", dev_id); return -EINVAL; } if (model_id == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL", dev_id); return -EINVAL; } @@ -601,7 +601,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -618,7 +618,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -635,7 +635,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -652,7 +652,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf struct rte_ml_dev *dev; if 
(!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -661,7 +661,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf return -ENOTSUP; if (model_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL\n", dev_id, + RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL", dev_id, model_id); return -EINVAL; } @@ -675,7 +675,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -684,7 +684,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) return -ENOTSUP; if (buffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL", dev_id); return -EINVAL; } @@ -698,7 +698,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -707,12 +707,12 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d return -ENOTSUP; if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -726,7 +726,7 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -735,12 +735,12 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * return -ENOTSUP; if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -811,7 +811,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -823,13 +823,13 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -847,7 +847,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", 
dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -859,13 +859,13 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -883,7 +883,7 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -892,12 +892,12 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error return -ENOTSUP; if (op == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL", dev_id); return -EINVAL; } if (error == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL", dev_id); return -EINVAL; } #else diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index a685f9e7bb..900d6de7f4 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -179,7 +179,7 @@ avx512_vpclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_512) return handlers_avx512; #endif - NET_LOG(INFO, "Requirements not met, can't use AVX512\n"); + NET_LOG(INFO, "Requirements not met, can't use AVX512"); return NULL; } @@ -205,7 +205,7 @@ sse42_pclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_sse42; #endif - NET_LOG(INFO, "Requirements not met, can't use SSE\n"); + NET_LOG(INFO, "Requirements not met, can't use SSE"); return NULL; } @@ -231,7 +231,7 @@ neon_pmull_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_neon; #endif - NET_LOG(INFO, "Requirements not met, can't use NEON\n"); + NET_LOG(INFO, "Requirements not met, can't use NEON"); return NULL; } diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index 3e8fac1df4..475eff6abe 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -160,13 +160,13 @@ ethdev_ptype_setup(uint16_t port, uint16_t queue) if (!l3_ipv4 || !l3_ipv6) { node_info("ethdev_rx", - "Enabling ptype callback for required ptypes on port %u\n", + "Enabling ptype callback for required ptypes on port %u", port); if (!rte_eth_add_rx_callback(port, queue, eth_pkt_parse_cb, NULL)) { node_err("ethdev_rx", - "Failed to add rx ptype cb: port=%d, queue=%d\n", + "Failed to add rx ptype cb: port=%d, queue=%d", port, queue); return -EINVAL; } diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c index 0dbfde64fe..18955971f6 100644 --- a/lib/node/ip4_lookup.c +++ b/lib/node/ip4_lookup.c @@ -143,7 +143,7 @@ rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop, ip, depth, val); if (ret < 0) { node_err("ip4_lookup", - "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d\n", + "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c index 6f56eb5ec5..309964f60f 100644 --- a/lib/node/ip6_lookup.c +++ b/lib/node/ip6_lookup.c @@ -283,7 +283,7 @@ rte_node_ip6_route_add(const uint8_t *ip, uint8_t depth, 
uint16_t next_hop, if (ret < 0) { node_err("ip6_lookup", "Unable to add entry %s / %d nh (%x) to LPM " - "table on sock %d, rc=%d\n", + "table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/kernel_rx.c b/lib/node/kernel_rx.c index 2dba7c8cc7..6c20cdbb1e 100644 --- a/lib/node/kernel_rx.c +++ b/lib/node/kernel_rx.c @@ -134,7 +134,7 @@ kernel_rx_node_do(struct rte_graph *graph, struct rte_node *node, kernel_rx_node if (len == 0 || len == 0xFFFF) { rte_pktmbuf_free(m); if (rx->idx <= 0) - node_dbg("kernel_rx", "rx_mbuf array is empty\n"); + node_dbg("kernel_rx", "rx_mbuf array is empty"); rx->idx--; break; } @@ -207,20 +207,20 @@ kernel_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); if (ctx->pktmbuf_pool == NULL) { - node_err("kernel_rx", "Invalid mbuf pool on graph %s\n", graph->name); + node_err("kernel_rx", "Invalid mbuf pool on graph %s", graph->name); return -EINVAL; } recv_info = rte_zmalloc_socket("kernel_rx_info", sizeof(kernel_rx_info_t), RTE_CACHE_LINE_SIZE, graph->socket); if (!recv_info) { - node_err("kernel_rx", "Kernel recv_info is NULL\n"); + node_err("kernel_rx", "Kernel recv_info is NULL"); return -ENOMEM; } sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (sock < 0) { - node_err("kernel_rx", "Unable to open RAW socket\n"); + node_err("kernel_rx", "Unable to open RAW socket"); return sock; } diff --git a/lib/node/kernel_tx.c b/lib/node/kernel_tx.c index 27d1808c71..3a96741622 100644 --- a/lib/node/kernel_tx.c +++ b/lib/node/kernel_tx.c @@ -36,7 +36,7 @@ kernel_tx_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t sin.sin_addr.s_addr = ip4->dst_addr; if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) - node_err("kernel_tx", "Unable to send packets: %s\n", strerror(errno)); + node_err("kernel_tx", "Unable to send packets: %s", strerror(errno)); } } @@ -87,7 +87,7 @@ kernel_tx_node_init(const struct rte_graph *graph __rte_unused, struct rte_node ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (ctx->sock < 0) - node_err("kernel_tx", "Unable to open RAW socket\n"); + node_err("kernel_tx", "Unable to open RAW socket"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index a9f3d6cc98..41a44be4b9 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -92,7 +92,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; @@ -144,7 +144,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 5979fb0efb..6b908e7ee0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -299,7 +299,7 @@ rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Copy the current value of token. 
@@ -350,7 +350,7 @@ rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id) { RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* The reader can go offline only after the load of the @@ -427,7 +427,7 @@ rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v, 1, rte_memory_order_release); __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING, - "Lock counter %u. Nested locks?\n", + "Lock counter %u. Nested locks?", v->qsbr_cnt[thread_id].lock_cnt); #endif } @@ -481,7 +481,7 @@ rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Acquire the changes to the shared data structure released diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c index 1fabec2bfe..1dab6d6645 100644 --- a/lib/stack/rte_stack.c +++ b/lib/stack/rte_stack.c @@ -56,7 +56,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, int ret; if (flags & ~(RTE_STACK_F_LF)) { - STACK_LOG_ERR("Unsupported stack flags %#x\n", flags); + STACK_LOG_ERR("Unsupported stack flags %#x", flags); return NULL; } @@ -65,7 +65,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, #endif #if !defined(RTE_STACK_LF_SUPPORTED) if (flags & RTE_STACK_F_LF) { - STACK_LOG_ERR("Lock-free stack is not supported on your platform\n"); + STACK_LOG_ERR("Lock-free stack is not supported on your platform"); rte_errno = ENOTSUP; return NULL; } @@ -82,7 +82,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - STACK_LOG_ERR("Cannot reserve memory for tailq\n"); + STACK_LOG_ERR("Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -92,7 +92,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id, 0, __alignof__(*s)); if (mz == NULL) { - STACK_LOG_ERR("Cannot reserve stack memzone!\n"); + STACK_LOG_ERR("Cannot reserve stack memzone!"); rte_mcfg_tailq_write_unlock(); rte_free(te); return NULL; diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 3e1ef1ac25..6e5443e5f8 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -245,7 +245,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -301,7 +301,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -321,7 +321,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->auth_key_len > VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH) { - VC_LOG_DBG("Invalid auth key length\n"); + VC_LOG_DBG("Invalid auth key length"); return -VIRTIO_CRYPTO_BADMSG; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
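This patch is the mirror image of the previous one: the per-library helpers touched here (rte_bbdev_log(), CDEV_LOG_ERR(), RTE_EDEV_LOG_ERR() and friends) already append the newline inside the macro, so a "\n" left in the caller's format string prints an extra blank line. A minimal sketch of that pattern (plain C, not DPDK code; LIB_LOG() is a hypothetical stand-in for such a helper and uses the same GNU ##__VA_ARGS__ extension the DPDK macros rely on):

#include <stdio.h>

/* Hypothetical per-library log helper: the macro terminates the line. */
#define LIB_LOG(fmt, ...) \
	fprintf(stderr, "LIB: " fmt "\n", ##__VA_ARGS__)

int main(void)
{
	LIB_LOG("invalid mp request type\n");	/* two newlines: extra blank line */
	LIB_LOG("invalid mp request type");	/* what the hunks above convert to */
	return 0;
}

Dropping the caller-side "\n", as the diff does, leaves exactly one newline per message.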
* Re: [RFC v2 05/14] lib: remove redundant newline from logs 2023-12-08 14:59 ` [RFC v2 05/14] lib: remove redundant newline from logs David Marchand @ 2023-12-08 17:02 ` Stephen Hemminger 2023-12-09 7:15 ` fengchengwen 2023-12-11 8:48 ` Mattias Rönnblom 2 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:02 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Mattias Rönnblom, Chengwen Feng, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia On Fri, 8 Dec 2023 15:59:39 +0100 David Marchand <david.marchand@redhat.com> wrote: > Fix places where two newline characters may be logged. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - split fixes on direct calls to printf or RTE_LOG in a previous patch, > > --- > drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- > lib/bbdev/rte_bbdev.c | 6 +- > lib/cfgfile/rte_cfgfile.c | 14 ++-- > lib/compressdev/rte_compressdev_pmd.c | 4 +- > lib/cryptodev/rte_cryptodev.c | 2 +- > lib/dispatcher/rte_dispatcher.c | 12 +-- > lib/dmadev/rte_dmadev.c | 2 +- > lib/eal/windows/eal_memory.c | 2 +- > lib/eventdev/eventdev_pmd.h | 6 +- > lib/eventdev/rte_event_crypto_adapter.c | 12 +-- > lib/eventdev/rte_event_dma_adapter.c | 14 ++-- > lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- > lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- > lib/eventdev/rte_event_timer_adapter.c | 4 +- > lib/eventdev/rte_eventdev.c | 4 +- > lib/metrics/rte_metrics_telemetry.c | 2 +- > lib/mldev/rte_mldev.c | 102 ++++++++++++------------ > lib/net/rte_net_crc.c | 6 +- > lib/node/ethdev_rx.c | 4 +- > lib/node/ip4_lookup.c | 2 +- > lib/node/ip6_lookup.c | 2 +- > lib/node/kernel_rx.c | 8 +- > lib/node/kernel_tx.c | 4 +- > lib/rcu/rte_rcu_qsbr.c | 4 +- > lib/rcu/rte_rcu_qsbr.h | 8 +- > lib/stack/rte_stack.c | 8 +- > lib/vhost/vhost_crypto.c | 6 +- > 27 files changed, 135 insertions(+), 135 deletions(-) Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 05/14] lib: remove redundant newline from logs 2023-12-08 14:59 ` [RFC v2 05/14] lib: remove redundant newline from logs David Marchand 2023-12-08 17:02 ` Stephen Hemminger @ 2023-12-09 7:15 ` fengchengwen 2023-12-11 8:48 ` Mattias Rönnblom 2 siblings, 0 replies; 122+ messages in thread From: fengchengwen @ 2023-12-09 7:15 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Mattias Rönnblom, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia For lib/dmadev part Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> On 2023/12/8 22:59, David Marchand wrote: > Fix places where two newline characters may be logged. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - split fixes on direct calls to printf or RTE_LOG in a previous patch, > > --- > drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- > lib/bbdev/rte_bbdev.c | 6 +- > lib/cfgfile/rte_cfgfile.c | 14 ++-- > lib/compressdev/rte_compressdev_pmd.c | 4 +- > lib/cryptodev/rte_cryptodev.c | 2 +- > lib/dispatcher/rte_dispatcher.c | 12 +-- > lib/dmadev/rte_dmadev.c | 2 +- > lib/eal/windows/eal_memory.c | 2 +- > lib/eventdev/eventdev_pmd.h | 6 +- > lib/eventdev/rte_event_crypto_adapter.c | 12 +-- > lib/eventdev/rte_event_dma_adapter.c | 14 ++-- > lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- > lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- > lib/eventdev/rte_event_timer_adapter.c | 4 +- > lib/eventdev/rte_eventdev.c | 4 +- > lib/metrics/rte_metrics_telemetry.c | 2 +- > lib/mldev/rte_mldev.c | 102 ++++++++++++------------ > lib/net/rte_net_crc.c | 6 +- > lib/node/ethdev_rx.c | 4 +- > lib/node/ip4_lookup.c | 2 +- > lib/node/ip6_lookup.c | 2 +- > lib/node/kernel_rx.c | 8 +- > lib/node/kernel_tx.c | 4 +- > lib/rcu/rte_rcu_qsbr.c | 4 +- > lib/rcu/rte_rcu_qsbr.h | 8 +- > lib/stack/rte_stack.c | 8 +- > lib/vhost/vhost_crypto.c | 6 +- > 27 files changed, 135 insertions(+), 135 deletions(-) > ... ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 05/14] lib: remove redundant newline from logs 2023-12-08 14:59 ` [RFC v2 05/14] lib: remove redundant newline from logs David Marchand 2023-12-08 17:02 ` Stephen Hemminger 2023-12-09 7:15 ` fengchengwen @ 2023-12-11 8:48 ` Mattias Rönnblom 2 siblings, 0 replies; 122+ messages in thread From: Mattias Rönnblom @ 2023-12-11 8:48 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Mattias Rönnblom, Chengwen Feng, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia On 2023-12-08 15:59, David Marchand wrote: > Fix places where two newline characters may be logged. > > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - split fixes on direct calls to printf or RTE_LOG in a previous patch, > > --- > drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- > lib/bbdev/rte_bbdev.c | 6 +- > lib/cfgfile/rte_cfgfile.c | 14 ++-- > lib/compressdev/rte_compressdev_pmd.c | 4 +- > lib/cryptodev/rte_cryptodev.c | 2 +- > lib/dispatcher/rte_dispatcher.c | 12 +-- > lib/dmadev/rte_dmadev.c | 2 +- > lib/eal/windows/eal_memory.c | 2 +- > lib/eventdev/eventdev_pmd.h | 6 +- > lib/eventdev/rte_event_crypto_adapter.c | 12 +-- > lib/eventdev/rte_event_dma_adapter.c | 14 ++-- > lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- > lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- > lib/eventdev/rte_event_timer_adapter.c | 4 +- > lib/eventdev/rte_eventdev.c | 4 +- > lib/metrics/rte_metrics_telemetry.c | 2 +- > lib/mldev/rte_mldev.c | 102 ++++++++++++------------ > lib/net/rte_net_crc.c | 6 +- > lib/node/ethdev_rx.c | 4 +- > lib/node/ip4_lookup.c | 2 +- > lib/node/ip6_lookup.c | 2 +- > lib/node/kernel_rx.c | 8 +- > lib/node/kernel_tx.c | 4 +- > lib/rcu/rte_rcu_qsbr.c | 4 +- > lib/rcu/rte_rcu_qsbr.h | 8 +- > lib/stack/rte_stack.c | 8 +- > lib/vhost/vhost_crypto.c | 6 +- > 27 files changed, 135 insertions(+), 135 deletions(-) > Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 06/14] eal/linux: remove log paraphrasing the doc 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (4 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 05/14] lib: remove redundant newline from logs David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:03 ` Stephen Hemminger 2023-12-08 20:54 ` Tyler Retzlaff 2023-12-08 14:59 ` [RFC v2 07/14] bpf: remove log level in internal helper David Marchand ` (7 subsequent siblings) 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb An error log message does not need to paraphrase the DPDK documentation. Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/eal/linux/eal_timer.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index 3a30284e3a..df9ad61ae9 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -152,11 +152,7 @@ rte_eal_hpet_init(int make_default) } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n" - "Please enable CONFIG_HPET_MMAP in your kernel configuration " - "to allow HPET support.\n" - "To run without using HPET, unset RTE_LIBEAL_USE_HPET " - "in your build configuration or use '--no-hpet' EAL flag.\n"); + RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); close(fd); internal_conf->no_hpet = 1; return -1; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 06/14] eal/linux: remove log paraphrasing the doc 2023-12-08 14:59 ` [RFC v2 06/14] eal/linux: remove log paraphrasing the doc David Marchand @ 2023-12-08 17:03 ` Stephen Hemminger 2023-12-08 20:54 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:03 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb On Fri, 8 Dec 2023 15:59:40 +0100 David Marchand <david.marchand@redhat.com> wrote: > An error log message does not need to paraphrase the DPDK documentation. > > Signed-off-by: David Marchand <david.marchand@redhat.com> Agree. Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 06/14] eal/linux: remove log paraphrasing the doc 2023-12-08 14:59 ` [RFC v2 06/14] eal/linux: remove log paraphrasing the doc David Marchand 2023-12-08 17:03 ` Stephen Hemminger @ 2023-12-08 20:54 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 20:54 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb On Fri, Dec 08, 2023 at 03:59:40PM +0100, David Marchand wrote: > An error log message does not need to paraphrase the DPDK documentation. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 07/14] bpf: remove log level in internal helper 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (5 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 06/14] eal/linux: remove log paraphrasing the doc David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:04 ` Stephen Hemminger 2023-12-08 21:02 ` Tyler Retzlaff 2023-12-08 14:59 ` [RFC v2 08/14] lib: simplify multilines log messages David Marchand ` (6 subsequent siblings) 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev There is no other log level than debug, simplify this helper. Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/bpf/bpf_validate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 95b9ef99ef..f246b3c5eb 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -2178,18 +2178,18 @@ restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) } static void -log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, - uint32_t pc, int32_t loglvl) +log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, + uint32_t pc) { const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - rte_log(loglvl, rte_bpf_logtype, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - rte_log(loglvl, rte_bpf_logtype, + RTE_BPF_LOG(DEBUG, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2269,7 +2269,7 @@ evaluate(struct bpf_verifier *bvf) } } - log_eval_state(bvf, ins + idx, idx, RTE_LOG_DEBUG); + log_dbg_eval_state(bvf, ins + idx, idx); bvf->evin = NULL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 07/14] bpf: remove log level in internal helper 2023-12-08 14:59 ` [RFC v2 07/14] bpf: remove log level in internal helper David Marchand @ 2023-12-08 17:04 ` Stephen Hemminger 2023-12-08 21:02 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:04 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev On Fri, 8 Dec 2023 15:59:41 +0100 David Marchand <david.marchand@redhat.com> wrote: > There is no other log level than debug, simplify this helper. > > Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 07/14] bpf: remove log level in internal helper 2023-12-08 14:59 ` [RFC v2 07/14] bpf: remove log level in internal helper David Marchand 2023-12-08 17:04 ` Stephen Hemminger @ 2023-12-08 21:02 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 21:02 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev On Fri, Dec 08, 2023 at 03:59:41PM +0100, David Marchand wrote: > There is no other log level than debug, simplify this helper. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 08/14] lib: simplify multilines log messages 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (6 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 07/14] bpf: remove log level in internal helper David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:05 ` Stephen Hemminger 2023-12-08 21:03 ` Tyler Retzlaff 2023-12-08 14:59 ` [RFC v2 09/14] rcu: introduce a logging helper David Marchand ` (5 subsequent siblings) 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Andrew Rybchenko Those error log messages don't need to span on multiple lines. Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/acl/tb_mem.c | 2 +- lib/bpf/bpf_stub.c | 4 ++-- lib/eal/windows/eal_hugepages.c | 4 ++-- lib/ethdev/rte_ethdev.c | 14 +++++++------- 4 files changed, 12 insertions(+), 12 deletions(-) diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 6a9d96aaed..238d65692a 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,7 +26,7 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu)\n failed, currently allocated " + RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated " "by pool: %zu bytes\n", __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index ebc5343896..e9f23304bc 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,7 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" + RTE_BPF_LOG(ERR, "%s() is not supported with current config, " "rebuild with libelf installed\n", __func__); rte_errno = ENOTSUP; @@ -36,7 +36,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" + RTE_BPF_LOG(ERR, "%s() is not supported with current config, " "rebuild with libpcap installed\n", __func__); rte_errno = ENOTSUP; diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index b007dceb39..701cd0cb08 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -105,8 +105,8 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege\n" - "Verify that large-page support privilege is assigned to the current user\n"); + RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege, " + "verify that large-page support privilege is assigned to the current user\n"); return -1; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b9d99ece15..b21764e6fa 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6709,8 +6709,8 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly capability\n", + "Device with port_id=%u is not configured, " + "cannot get IP reassembly capability\n", port_id); return -EINVAL; } @@ -6745,8 +6745,8 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly 
configuration\n", + "Device with port_id=%u is not configured, " + "cannot get IP reassembly configuration\n", port_id); return -EINVAL; } @@ -6779,15 +6779,15 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration\n", + "Device with port_id=%u is not configured, " + "cannot set IP reassembly configuration\n", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u started,\n" + "Device with port_id=%u started, " "cannot configure IP reassembly params.\n", port_id); return -EINVAL; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 08/14] lib: simplify multilines log messages 2023-12-08 14:59 ` [RFC v2 08/14] lib: simplify multilines log messages David Marchand @ 2023-12-08 17:05 ` Stephen Hemminger 2023-12-16 9:26 ` Andrew Rybchenko 2023-12-08 21:03 ` Tyler Retzlaff 1 sibling, 1 reply; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:05 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Andrew Rybchenko On Fri, 8 Dec 2023 15:59:42 +0100 David Marchand <david.marchand@redhat.com> wrote: > Those error log messages don't need to span on multiple lines. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > lib/acl/tb_mem.c | 2 +- > lib/bpf/bpf_stub.c | 4 ++-- > lib/eal/windows/eal_hugepages.c | 4 ++-- > lib/ethdev/rte_ethdev.c | 14 +++++++------- > 4 files changed, 12 insertions(+), 12 deletions(-) Could you fix these messages to not cross source lines as well? Feel free to shorten the wording as necessary. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 08/14] lib: simplify multilines log messages 2023-12-08 17:05 ` Stephen Hemminger @ 2023-12-16 9:26 ` Andrew Rybchenko 0 siblings, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:26 UTC (permalink / raw) To: Stephen Hemminger, David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam On 12/8/23 20:05, Stephen Hemminger wrote: > On Fri, 8 Dec 2023 15:59:42 +0100 > David Marchand <david.marchand@redhat.com> wrote: > >> Those error log messages don't need to span on multiple lines. >> >> Signed-off-by: David Marchand <david.marchand@redhat.com> >> --- >> lib/acl/tb_mem.c | 2 +- >> lib/bpf/bpf_stub.c | 4 ++-- >> lib/eal/windows/eal_hugepages.c | 4 ++-- >> lib/ethdev/rte_ethdev.c | 14 +++++++------- >> 4 files changed, 12 insertions(+), 12 deletions(-) > > Could you fix these messages to not cross source lines as well? +1 > Feel free to shorten the wording as necessary. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 08/14] lib: simplify multilines log messages 2023-12-08 14:59 ` [RFC v2 08/14] lib: simplify multilines log messages David Marchand 2023-12-08 17:05 ` Stephen Hemminger @ 2023-12-08 21:03 ` Tyler Retzlaff 1 sibling, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 21:03 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Andrew Rybchenko On Fri, Dec 08, 2023 at 03:59:42PM +0100, David Marchand wrote: > Those error log messages don't need to span on multiple lines. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
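For concreteness, the kind of consolidation requested above, taking one ethdev message from this patch as the example; the "after" form is only an illustrative sketch of such a change, not what was applied in the series:

/* Before: the format string crosses source lines, which makes the message hard to grep for. */
RTE_ETHDEV_LOG(ERR,
	"Device with port_id=%u is not configured, "
	"cannot get IP reassembly capability\n",
	port_id);

/* After: one source line per message, with the wording shortened as invited above. */
RTE_ETHDEV_LOG(ERR, "Port %u not configured, cannot get IP reassembly capability\n", port_id);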
* [RFC v2 09/14] rcu: introduce a logging helper 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (7 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 08/14] lib: simplify multilines log messages David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:08 ` Stephen Hemminger ` (2 more replies) 2023-12-08 14:59 ` [RFC v2 10/14] vhost: improve log for memory dumping configuration David Marchand ` (4 subsequent siblings) 13 siblings, 3 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- lib/rcu/rte_rcu_qsbr.h | 1 + 2 files changed, 24 insertions(+), 39 deletions(-) diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 41a44be4b9..5b6530788a 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -19,6 +19,9 @@ #include "rte_rcu_qsbr.h" #include "rcu_qsbr_pvt.h" +#define RCU_LOG(level, fmt, args...) \ + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + /* Get the memory size of QSBR variable */ size_t rte_rcu_qsbr_get_memsize(uint32_t max_threads) @@ -26,9 +29,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads) size_t sz; if (max_threads == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid max_threads %u\n", - __func__, max_threads); + RCU_LOG(ERR, "Invalid max_threads %u", max_threads); rte_errno = EINVAL; return 1; @@ -52,8 +53,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) size_t sz; if (v == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -85,8 +85,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -137,8 +136,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -211,8 +209,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v) uint32_t i, t, id; if (v == NULL || f == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -282,8 +279,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) params->v == NULL || params->name == NULL || params->size == 0 || params->esize == 0 || (params->esize % 4 != 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return NULL; @@ -293,9 +289,10 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) */ if ((params->trigger_reclaim_limit <= params->size) && (params->max_reclaim_size == 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter, size = %u, trigger_reclaim_limit = %u, max_reclaim_size = %u\n", - __func__, params->size, params->trigger_reclaim_limit, + RCU_LOG(ERR, + 
"Invalid input parameter, size = %u, trigger_reclaim_limit = %u, " + "max_reclaim_size = %u", + params->size, params->trigger_reclaim_limit, params->max_reclaim_size); rte_errno = EINVAL; @@ -328,8 +325,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) __RTE_QSBR_TOKEN_SIZE + params->esize, qs_fifo_size, SOCKET_ID_ANY, flags); if (dq->r == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): defer queue create failed\n", __func__); + RCU_LOG(ERR, "defer queue create failed"); rte_free(dq); return NULL; } @@ -354,8 +350,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) uint32_t cur_size; if (dq == NULL || e == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -372,8 +367,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) */ cur_size = rte_ring_count(dq->r); if (cur_size > dq->trigger_reclaim_limit) { - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Triggering reclamation\n", __func__); + RCU_LOG(INFO, "Triggering reclamation"); rte_rcu_qsbr_dq_reclaim(dq, dq->max_reclaim_size, NULL, NULL, NULL); } @@ -391,23 +385,18 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) * Enqueue uses the configured flags when the DQ was created. */ if (rte_ring_enqueue_elem(dq->r, data, dq->esize) != 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Enqueue failed\n", __func__); + RCU_LOG(ERR, "Enqueue failed"); /* Note that the token generated above is not used. * Other than wasting tokens, it should not cause any * other issues. */ - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Skipped enqueuing token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Skipped enqueuing token = %" PRIu64, dq_elem->token); rte_errno = ENOSPC; return 1; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Enqueued token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Enqueued token = %" PRIu64, dq_elem->token); return 0; } @@ -422,8 +411,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, __rte_rcu_qsbr_dq_elem_t *dq_elem; if (dq == NULL || n == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -445,17 +433,14 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, } rte_ring_dequeue_elem_finish(dq->r, 1); - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Reclaimed token = %" PRIu64, dq_elem->token); dq->free_fn(dq->p, dq_elem->elem, 1); cnt++; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed %u resources\n", __func__, cnt); + RCU_LOG(INFO, "Reclaimed %u resources", cnt); if (freed != NULL) *freed = cnt; @@ -472,8 +457,7 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) unsigned int pending; if (dq == NULL) { - rte_log(RTE_LOG_DEBUG, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(DEBUG, "Invalid input parameter"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 6b908e7ee0..0dca8310c0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -36,6 +36,7 @@ extern "C" { #include <rte_ring.h> extern int rte_rcu_log_type; +#define RTE_LOGTYPE_RCU rte_rcu_log_type #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) 
\ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-08 14:59 ` [RFC v2 09/14] rcu: introduce a logging helper David Marchand @ 2023-12-08 17:08 ` Stephen Hemminger 2023-12-08 18:26 ` Honnappa Nagarahalli 2023-12-08 21:27 ` Tyler Retzlaff 2 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:08 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Honnappa Nagarahalli On Fri, 8 Dec 2023 15:59:43 +0100 David Marchand <david.marchand@redhat.com> wrote: > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > lib/rcu/rte_rcu_qsbr.h | 1 + > 2 files changed, 24 insertions(+), 39 deletions(-) Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* RE: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-08 14:59 ` [RFC v2 09/14] rcu: introduce a logging helper David Marchand 2023-12-08 17:08 ` Stephen Hemminger @ 2023-12-08 18:26 ` Honnappa Nagarahalli 2023-12-08 21:27 ` Tyler Retzlaff 2 siblings, 0 replies; 122+ messages in thread From: Honnappa Nagarahalli @ 2023-12-08 18:26 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, nd, nd > -----Original Message----- > From: David Marchand <david.marchand@redhat.com> > Sent: Friday, December 8, 2023 9:00 AM > To: dev@dpdk.org > Cc: thomas@monjalon.net; ferruh.yigit@amd.com; > bruce.richardson@intel.com; stephen@networkplumber.org; > mb@smartsharesystems.com; Honnappa Nagarahalli > <Honnappa.Nagarahalli@arm.com> > Subject: [RFC v2 09/14] rcu: introduce a logging helper > > Signed-off-by: David Marchand <david.marchand@redhat.com> Thank you, looks good Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> > --- > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > lib/rcu/rte_rcu_qsbr.h | 1 + > 2 files changed, 24 insertions(+), 39 deletions(-) > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index > 41a44be4b9..5b6530788a 100644 > --- a/lib/rcu/rte_rcu_qsbr.c > +++ b/lib/rcu/rte_rcu_qsbr.c > @@ -19,6 +19,9 @@ > #include "rte_rcu_qsbr.h" > #include "rcu_qsbr_pvt.h" > > +#define RCU_LOG(level, fmt, args...) \ > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > + > /* Get the memory size of QSBR variable */ size_t > rte_rcu_qsbr_get_memsize(uint32_t max_threads) @@ -26,9 +29,7 @@ > rte_rcu_qsbr_get_memsize(uint32_t max_threads) > size_t sz; > > if (max_threads == 0) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid max_threads %u\n", > - __func__, max_threads); > + RCU_LOG(ERR, "Invalid max_threads %u", max_threads); > rte_errno = EINVAL; > > return 1; > @@ -52,8 +53,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t > max_threads) > size_t sz; > > if (v == NULL) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -85,8 +85,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, > unsigned int thread_id) > uint64_t old_bmap, new_bmap; > > if (v == NULL || thread_id >= v->max_threads) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -137,8 +136,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr > *v, unsigned int thread_id) > uint64_t old_bmap, new_bmap; > > if (v == NULL || thread_id >= v->max_threads) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -211,8 +209,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v) > uint32_t i, t, id; > > if (v == NULL || f == NULL) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -282,8 +279,7 @@ rte_rcu_qsbr_dq_create(const struct > rte_rcu_qsbr_dq_parameters *params) > params->v == NULL || params->name == NULL || > params->size == 0 || params->esize == 0 || > (params->esize % 4 != 0)) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, 
"Invalid input parameter"); > rte_errno = EINVAL; > > return NULL; > @@ -293,9 +289,10 @@ rte_rcu_qsbr_dq_create(const struct > rte_rcu_qsbr_dq_parameters *params) > */ > if ((params->trigger_reclaim_limit <= params->size) && > (params->max_reclaim_size == 0)) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter, size = %u, > trigger_reclaim_limit = %u, max_reclaim_size = %u\n", > - __func__, params->size, params- > >trigger_reclaim_limit, > + RCU_LOG(ERR, > + "Invalid input parameter, size = %u, > trigger_reclaim_limit = %u, " > + "max_reclaim_size = %u", > + params->size, params->trigger_reclaim_limit, > params->max_reclaim_size); > rte_errno = EINVAL; > > @@ -328,8 +325,7 @@ rte_rcu_qsbr_dq_create(const struct > rte_rcu_qsbr_dq_parameters *params) > __RTE_QSBR_TOKEN_SIZE + params->esize, > qs_fifo_size, SOCKET_ID_ANY, flags); > if (dq->r == NULL) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): defer queue create failed\n", __func__); > + RCU_LOG(ERR, "defer queue create failed"); > rte_free(dq); > return NULL; > } > @@ -354,8 +350,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq > *dq, void *e) > uint32_t cur_size; > > if (dq == NULL || e == NULL) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -372,8 +367,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq > *dq, void *e) > */ > cur_size = rte_ring_count(dq->r); > if (cur_size > dq->trigger_reclaim_limit) { > - rte_log(RTE_LOG_INFO, rte_rcu_log_type, > - "%s(): Triggering reclamation\n", __func__); > + RCU_LOG(INFO, "Triggering reclamation"); > rte_rcu_qsbr_dq_reclaim(dq, dq->max_reclaim_size, > NULL, NULL, NULL); > } > @@ -391,23 +385,18 @@ int rte_rcu_qsbr_dq_enqueue(struct > rte_rcu_qsbr_dq *dq, void *e) > * Enqueue uses the configured flags when the DQ was created. > */ > if (rte_ring_enqueue_elem(dq->r, data, dq->esize) != 0) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Enqueue failed\n", __func__); > + RCU_LOG(ERR, "Enqueue failed"); > /* Note that the token generated above is not used. > * Other than wasting tokens, it should not cause any > * other issues. 
> */ > - rte_log(RTE_LOG_INFO, rte_rcu_log_type, > - "%s(): Skipped enqueuing token = %" PRIu64 "\n", > - __func__, dq_elem->token); > + RCU_LOG(INFO, "Skipped enqueuing token = %" PRIu64, > dq_elem->token); > > rte_errno = ENOSPC; > return 1; > } > > - rte_log(RTE_LOG_INFO, rte_rcu_log_type, > - "%s(): Enqueued token = %" PRIu64 "\n", > - __func__, dq_elem->token); > + RCU_LOG(INFO, "Enqueued token = %" PRIu64, dq_elem->token); > > return 0; > } > @@ -422,8 +411,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, > unsigned int n, > __rte_rcu_qsbr_dq_elem_t *dq_elem; > > if (dq == NULL || n == 0) { > - rte_log(RTE_LOG_ERR, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(ERR, "Invalid input parameter"); > rte_errno = EINVAL; > > return 1; > @@ -445,17 +433,14 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq > *dq, unsigned int n, > } > rte_ring_dequeue_elem_finish(dq->r, 1); > > - rte_log(RTE_LOG_INFO, rte_rcu_log_type, > - "%s(): Reclaimed token = %" PRIu64 "\n", > - __func__, dq_elem->token); > + RCU_LOG(INFO, "Reclaimed token = %" PRIu64, dq_elem- > >token); > > dq->free_fn(dq->p, dq_elem->elem, 1); > > cnt++; > } > > - rte_log(RTE_LOG_INFO, rte_rcu_log_type, > - "%s(): Reclaimed %u resources\n", __func__, cnt); > + RCU_LOG(INFO, "Reclaimed %u resources", cnt); > > if (freed != NULL) > *freed = cnt; > @@ -472,8 +457,7 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) > unsigned int pending; > > if (dq == NULL) { > - rte_log(RTE_LOG_DEBUG, rte_rcu_log_type, > - "%s(): Invalid input parameter\n", __func__); > + RCU_LOG(DEBUG, "Invalid input parameter"); > > return 0; > } > diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index > 6b908e7ee0..0dca8310c0 100644 > --- a/lib/rcu/rte_rcu_qsbr.h > +++ b/lib/rcu/rte_rcu_qsbr.h > @@ -36,6 +36,7 @@ extern "C" { > #include <rte_ring.h> > > extern int rte_rcu_log_type; > +#define RTE_LOGTYPE_RCU rte_rcu_log_type > > #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG > #define __RTE_RCU_DP_LOG(level, fmt, args...) \ > -- > 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-08 14:59 ` [RFC v2 09/14] rcu: introduce a logging helper David Marchand 2023-12-08 17:08 ` Stephen Hemminger 2023-12-08 18:26 ` Honnappa Nagarahalli @ 2023-12-08 21:27 ` Tyler Retzlaff 2023-12-12 15:00 ` David Marchand 2 siblings, 1 reply; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-08 21:27 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli On Fri, Dec 08, 2023 at 03:59:43PM +0100, David Marchand wrote: > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > lib/rcu/rte_rcu_qsbr.h | 1 + > 2 files changed, 24 insertions(+), 39 deletions(-) > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c > index 41a44be4b9..5b6530788a 100644 > --- a/lib/rcu/rte_rcu_qsbr.c > +++ b/lib/rcu/rte_rcu_qsbr.c > @@ -19,6 +19,9 @@ > #include "rte_rcu_qsbr.h" > #include "rcu_qsbr_pvt.h" > > +#define RCU_LOG(level, fmt, args...) \ > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > + Since you are looking in the area for all the versions of gcc/clang we use able to support the non-standard __VA_ARGS__ that discard the comma? I know that some versions of gcc do and if it does I would like to move to using __VA_ARGS__ instead of args so we can use the same thing with msvc. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-08 21:27 ` Tyler Retzlaff @ 2023-12-12 15:00 ` David Marchand 2023-12-12 19:11 ` Tyler Retzlaff 0 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-12 15:00 UTC (permalink / raw) To: Tyler Retzlaff Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli On Fri, Dec 8, 2023 at 10:27 PM Tyler Retzlaff <roretzla@linux.microsoft.com> wrote: > > On Fri, Dec 08, 2023 at 03:59:43PM +0100, David Marchand wrote: > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > --- > > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > > > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > > lib/rcu/rte_rcu_qsbr.h | 1 + > > 2 files changed, 24 insertions(+), 39 deletions(-) > > > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c > > index 41a44be4b9..5b6530788a 100644 > > --- a/lib/rcu/rte_rcu_qsbr.c > > +++ b/lib/rcu/rte_rcu_qsbr.c > > @@ -19,6 +19,9 @@ > > #include "rte_rcu_qsbr.h" > > #include "rcu_qsbr_pvt.h" > > > > +#define RCU_LOG(level, fmt, args...) \ > > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > > + > > Since you are looking in the area for all the versions of gcc/clang we > use able to support the non-standard __VA_ARGS__ that discard the comma? I suspect there is some typo (maybe s/for all/are all/ ?). Can you please clarify? > > I know that some versions of gcc do and if it does I would like to move > to using __VA_ARGS__ instead of args so we can use the same thing with > msvc. If the request is to translate the ## args stuff to __VA_ARGS__, I would prefer this is done in a separate series and not to mix with this already huge series. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-12 15:00 ` David Marchand @ 2023-12-12 19:11 ` Tyler Retzlaff 2023-12-18 9:37 ` David Marchand 0 siblings, 1 reply; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-12 19:11 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli On Tue, Dec 12, 2023 at 04:00:35PM +0100, David Marchand wrote: > On Fri, Dec 8, 2023 at 10:27 PM Tyler Retzlaff > <roretzla@linux.microsoft.com> wrote: > > > > On Fri, Dec 08, 2023 at 03:59:43PM +0100, David Marchand wrote: > > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > > --- > > > > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > > > > > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > > > lib/rcu/rte_rcu_qsbr.h | 1 + > > > 2 files changed, 24 insertions(+), 39 deletions(-) > > > > > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c > > > index 41a44be4b9..5b6530788a 100644 > > > --- a/lib/rcu/rte_rcu_qsbr.c > > > +++ b/lib/rcu/rte_rcu_qsbr.c > > > @@ -19,6 +19,9 @@ > > > #include "rte_rcu_qsbr.h" > > > #include "rcu_qsbr_pvt.h" > > > > > > +#define RCU_LOG(level, fmt, args...) \ > > > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > > > + > > > > Since you are looking in the area for all the versions of gcc/clang we > > use able to support the non-standard __VA_ARGS__ that discard the comma? > > I suspect there is some typo (maybe s/for all/are all/ ?). > Can you please clarify? > > > > > > I know that some versions of gcc do and if it does I would like to move > > to using __VA_ARGS__ instead of args so we can use the same thing with > > msvc. > > If the request is to translate the ## args stuff to __VA_ARGS__, I > would prefer this is done in a separate series and not to mix with > this already huge series. Yes. I was asking if translation was possible from ## args to __VA_ARGS__. I was also asking if you could help do the translation. I didn't mean to suggest it should be done in this series. Ty > > > -- > David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-12 19:11 ` Tyler Retzlaff @ 2023-12-18 9:37 ` David Marchand 2023-12-18 19:52 ` Tyler Retzlaff 0 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-18 9:37 UTC (permalink / raw) To: Tyler Retzlaff Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli On Tue, Dec 12, 2023 at 8:11 PM Tyler Retzlaff <roretzla@linux.microsoft.com> wrote: > On Tue, Dec 12, 2023 at 04:00:35PM +0100, David Marchand wrote: > > On Fri, Dec 8, 2023 at 10:27 PM Tyler Retzlaff > > <roretzla@linux.microsoft.com> wrote: > > > > > > On Fri, Dec 08, 2023 at 03:59:43PM +0100, David Marchand wrote: > > > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > > > --- > > > > > > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > > > > > > > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > > > > lib/rcu/rte_rcu_qsbr.h | 1 + > > > > 2 files changed, 24 insertions(+), 39 deletions(-) > > > > > > > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c > > > > index 41a44be4b9..5b6530788a 100644 > > > > --- a/lib/rcu/rte_rcu_qsbr.c > > > > +++ b/lib/rcu/rte_rcu_qsbr.c > > > > @@ -19,6 +19,9 @@ > > > > #include "rte_rcu_qsbr.h" > > > > #include "rcu_qsbr_pvt.h" > > > > > > > > +#define RCU_LOG(level, fmt, args...) \ > > > > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > > > > + > > > > > > Since you are looking in the area for all the versions of gcc/clang we > > > use able to support the non-standard __VA_ARGS__ that discard the comma? > > > > I suspect there is some typo (maybe s/for all/are all/ ?). > > Can you please clarify? > > > > > > > > > > I know that some versions of gcc do and if it does I would like to move > > > to using __VA_ARGS__ instead of args so we can use the same thing with > > > msvc. > > > > If the request is to translate the ## args stuff to __VA_ARGS__, I > > would prefer this is done in a separate series and not to mix with > > this already huge series. > > Yes. > > I was asking if translation was possible from ## args to __VA_ARGS__. > > I was also asking if you could help do the translation. I didn't mean to > suggest it should be done in this series. Ok, I'll see if I can spend some cycles on this, but in January. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 09/14] rcu: introduce a logging helper 2023-12-18 9:37 ` David Marchand @ 2023-12-18 19:52 ` Tyler Retzlaff 0 siblings, 0 replies; 122+ messages in thread From: Tyler Retzlaff @ 2023-12-18 19:52 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli On Mon, Dec 18, 2023 at 10:37:12AM +0100, David Marchand wrote: > On Tue, Dec 12, 2023 at 8:11 PM Tyler Retzlaff > <roretzla@linux.microsoft.com> wrote: > > On Tue, Dec 12, 2023 at 04:00:35PM +0100, David Marchand wrote: > > > On Fri, Dec 8, 2023 at 10:27 PM Tyler Retzlaff > > > <roretzla@linux.microsoft.com> wrote: > > > > > > > > On Fri, Dec 08, 2023 at 03:59:43PM +0100, David Marchand wrote: > > > > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > > > > --- > > > > > > > > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > > > > > > > > > lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- > > > > > lib/rcu/rte_rcu_qsbr.h | 1 + > > > > > 2 files changed, 24 insertions(+), 39 deletions(-) > > > > > > > > > > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c > > > > > index 41a44be4b9..5b6530788a 100644 > > > > > --- a/lib/rcu/rte_rcu_qsbr.c > > > > > +++ b/lib/rcu/rte_rcu_qsbr.c > > > > > @@ -19,6 +19,9 @@ > > > > > #include "rte_rcu_qsbr.h" > > > > > #include "rcu_qsbr_pvt.h" > > > > > > > > > > +#define RCU_LOG(level, fmt, args...) \ > > > > > + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) > > > > > + > > > > > > > > Since you are looking in the area for all the versions of gcc/clang we > > > > use able to support the non-standard __VA_ARGS__ that discard the comma? > > > > > > I suspect there is some typo (maybe s/for all/are all/ ?). > > > Can you please clarify? > > > > > > > > > > > > > > I know that some versions of gcc do and if it does I would like to move > > > > to using __VA_ARGS__ instead of args so we can use the same thing with > > > > msvc. > > > > > > If the request is to translate the ## args stuff to __VA_ARGS__, I > > > would prefer this is done in a separate series and not to mix with > > > this already huge series. > > > > Yes. > > > > I was asking if translation was possible from ## args to __VA_ARGS__. > > > > I was also asking if you could help do the translation. I didn't mean to > > suggest it should be done in this series. > > Ok, I'll see if I can spend some cycles on this, but in January. fantastic i'd appreciate that a lot, i'll focus on some other things then. ty > > > -- > David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
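As a reference for the question above, two hedged sketches of what an RCU_LOG without the GNU-specific "fmt, args..." / "## args" form could look like. Neither is taken from this series; both assume only RTE_LOG(), the RTE_LOGTYPE_RCU alias added by the patch, and the RTE_FMT()/RTE_FMT_HEAD()/RTE_FMT_TAIL() helpers already in rte_common.h:

/* Option 1: rely on C23 __VA_OPT__ (also available as a gcc/clang extension)
 * to drop the comma when no argument follows the format string.
 */
#define RCU_LOG(level, fmt, ...) \
	RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__ __VA_OPT__(,) __VA_ARGS__)

/* Option 2: no __VA_OPT__; fold the format string into __VA_ARGS__ and reuse
 * the RTE_FMT() helpers, the same trick the RTE_LOG_LINE helper later in this
 * series is built on.
 */
#define RCU_LOG(level, ...) \
	RTE_LOG(level, RCU, RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
		__func__, RTE_FMT_TAIL(__VA_ARGS__ ,)))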
* [RFC v2 10/14] vhost: improve log for memory dumping configuration 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (8 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 09/14] rcu: introduce a logging helper David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:14 ` Stephen Hemminger 2023-12-08 14:59 ` [RFC v2 11/14] log: add a per line log helper David Marchand ` (3 subsequent siblings) 13 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Maxime Coquelin, Chenbo Xia Add the device name as a prefix of logs associated to madvise() calls. Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/vhost/iotlb.c | 18 +++++++++--------- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 26 +++++++++++++------------- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 87ac0e5126..10ab77262e 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -54,16 +54,16 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct vhost_iotlb_entr } static void -vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node) +vhost_user_iotlb_set_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node) { uint64_t start; start = node->uaddr + node->uoffset; - mem_set_dump((void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); + mem_set_dump(dev, (void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); } static void -vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, +vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node, struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next) { uint64_t start, end; @@ -80,7 +80,7 @@ vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift)); if (end > start) - mem_set_dump((void *)(uintptr_t)start, end - start, false, + mem_set_dump(dev, (void *)(uintptr_t)start, end - start, false, RTE_BIT64(node->page_shift)); } @@ -204,7 +204,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net *dev) vhost_user_iotlb_wr_lock_all(dev); RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) { - vhost_user_iotlb_clear_dump(node, NULL, NULL); + vhost_user_iotlb_clear_dump(dev, node, NULL, NULL); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -230,7 +230,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net *dev) if (!entry_idx) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -285,7 +285,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua vhost_user_iotlb_pool_put(dev, new_node); goto unlock; } else if (node->iova > new_node->iova) { - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_BEFORE(node, new_node, next); dev->iotlb_cache_nr++; @@ -293,7 +293,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua } } - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_TAIL(&dev->iotlb_list, new_node, next); dev->iotlb_cache_nr++; @@ -322,7 +322,7 @@ vhost_user_iotlb_cache_remove(struct virtio_net *dev, uint64_t iova, 
uint64_t si if (iova < node->iova + node->size) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index f8624fba3d..5f24911190 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -1062,6 +1062,6 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } -void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment); +void mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t alignment); #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index e36312181a..413f068bcd 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -763,7 +763,7 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr) } void -mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) +mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t pagesz) { #ifdef MADV_DONTDUMP void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz); @@ -771,8 +771,8 @@ mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - rte_log(RTE_LOG_INFO, vhost_config_log_level, - "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno)); + VHOST_LOG_CONFIG(dev->ifname, INFO, + "could not set coredump preference (%s).\n", strerror(errno)); } #endif } @@ -807,7 +807,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc_packed, len, true, + mem_set_dump(dev, vq->desc_packed, len, true, hua_to_alignment(dev->mem, vq->desc_packed)); numa_realloc(&dev, &vq); *pdev = dev; @@ -824,7 +824,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->driver_event, len, true, + mem_set_dump(dev, vq->driver_event, len, true, hua_to_alignment(dev->mem, vq->driver_event)); len = sizeof(struct vring_packed_desc_event); vq->device_event = (struct vring_packed_desc_event *) @@ -837,7 +837,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->device_event, len, true, + mem_set_dump(dev, vq->device_event, len, true, hua_to_alignment(dev->mem, vq->device_event)); vq->access_ok = true; return; @@ -855,7 +855,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); + mem_set_dump(dev, vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); numa_realloc(&dev, &vq); *pdev = dev; *pvq = vq; @@ -871,7 +871,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); + mem_set_dump(dev, vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); len = sizeof(struct vring_used) + sizeof(struct vring_used_elem) * vq->size; if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) @@ -884,7 +884,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); + mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); 
if (vq->last_used_idx != vq->used->idx) { VHOST_LOG_CONFIG(dev->ifname, WARNING, @@ -1274,7 +1274,7 @@ vhost_user_mmap_region(struct virtio_net *dev, region->mmap_addr = mmap_addr; region->mmap_size = mmap_size; region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset; - mem_set_dump(mmap_addr, mmap_size, false, alignment); + mem_set_dump(dev, mmap_addr, mmap_size, false, alignment); if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { @@ -1580,7 +1580,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } alignment = get_blk_size(mfd); - mem_set_dump(ptr, size, false, alignment); + mem_set_dump(dev, ptr, size, false, alignment); *fd = mfd; return ptr; } @@ -1789,7 +1789,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info->fd = -1; } - mem_set_dump(addr, mmap_size, false, get_blk_size(fd)); + mem_set_dump(dev, addr, mmap_size, false, get_blk_size(fd)); dev->inflight_info->fd = fd; dev->inflight_info->addr = addr; dev->inflight_info->size = mmap_size; @@ -2343,7 +2343,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, dev->log_addr = (uint64_t)(uintptr_t)addr; dev->log_base = dev->log_addr + off; dev->log_size = size; - mem_set_dump(addr, size + off, false, alignment); + mem_set_dump(dev, addr, size + off, false, alignment); for (i = 0; i < dev->nr_vring; i++) { struct vhost_virtqueue *vq = dev->virtqueue[i]; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 10/14] vhost: improve log for memory dumping configuration 2023-12-08 14:59 ` [RFC v2 10/14] vhost: improve log for memory dumping configuration David Marchand @ 2023-12-08 17:14 ` Stephen Hemminger 0 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:14 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Maxime Coquelin, Chenbo Xia On Fri, 8 Dec 2023 15:59:44 +0100 David Marchand <david.marchand@redhat.com> wrote: > Add the device name as a prefix of logs associated to madvise() calls. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > lib/vhost/iotlb.c | 18 +++++++++--------- > lib/vhost/vhost.h | 2 +- > lib/vhost/vhost_user.c | 26 +++++++++++++------------- > 3 files changed, 23 insertions(+), 23 deletions(-) The logging part looks good, but looking at the code, the function mem_set_dump() has some things that should be addressed. Since it is a global (but not exported) function, the function name may potentially clash when used with static linkage. It should be renamed. The code duplicates eal_mem_set_dump(); it would be better to have one version. Maybe rte_eal_mem_set_dump()? Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
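A minimal sketch of the consolidation suggested above; the rte_eal_mem_set_dump() name comes from the reply itself, but the signature, return value and placement are hypothetical and not an existing DPDK API:

/* Hypothetical shared helper, exported for internal use so vhost can drop its
 * own copy of the madvise() logic (assumed to set errno on failure, as
 * madvise() does).
 */
__rte_internal
int rte_eal_mem_set_dump(void *virt, size_t size, bool dump);

/* lib/vhost/vhost_user.c would then only keep the per-device log prefix added
 * in this patch, e.g.:
 */
if (rte_eal_mem_set_dump(start, len, enable) < 0)
	VHOST_LOG_CONFIG(dev->ifname, INFO,
		"could not set coredump preference (%s).\n", strerror(errno));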
* [RFC v2 11/14] log: add a per line log helper 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (9 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 10/14] vhost: improve log for memory dumping configuration David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:15 ` Stephen Hemminger 2023-12-09 7:21 ` fengchengwen 2023-12-08 14:59 ` [RFC v2 12/14] lib: convert to per line logging David Marchand ` (2 subsequent siblings) 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb gcc builtin __builtin_strchr can be used as a static assertion to check whether passed format strings contain a \n. This can be useful to detect double \n in log messages. Signed-off-by: David Marchand <david.marchand@redhat.com> --- Changes since RFC v1: - added a check in checkpatches.sh, --- devtools/checkpatches.sh | 8 ++++++++ lib/log/rte_log.h | 21 +++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh index 10b79ca2bc..10d1bf490b 100755 --- a/devtools/checkpatches.sh +++ b/devtools/checkpatches.sh @@ -53,6 +53,14 @@ print_usage () { check_forbidden_additions() { # <patch> res=0 + # refrain from new calls to RTE_LOG + awk -v FOLDERS="lib" \ + -v EXPRESSIONS="RTE_LOG\\\(" \ + -v RET_ON_FAIL=1 \ + -v MESSAGE='Prefer RTE_LOG_LINE' \ + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \ + "$1" || res=1 + # refrain from new additions of rte_panic() and rte_exit() # multiple folders and expressions are separated by spaces awk -v FOLDERS="lib drivers" \ diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h index 27fb6129a7..da7e672e81 100644 --- a/lib/log/rte_log.h +++ b/lib/log/rte_log.h @@ -17,6 +17,7 @@ extern "C" { #endif +#include <assert.h> #include <stdint.h> #include <stdio.h> #include <stdarg.h> @@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap) RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \ 0) +#ifdef RTE_TOOLCHAIN_GCC +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ + static_assert(!__builtin_strchr(fmt, '\n'), \ + "This log format string contains a \\n") +#else +#define RTE_LOG_CHECK_NO_NEWLINE(...) +#endif + +#define RTE_LOG_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + +#define RTE_LOG_DP_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + #define RTE_LOG_REGISTER_IMPL(type, name, level) \ int type; \ RTE_INIT(__##type) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
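A minimal usage sketch of the helper above (not part of the patch): with gcc, a stray \n in the format string trips the static assertion at build time, while the newline-free form compiles and gets its trailing newline appended by the macro. The EAL logtype is used here only as an example of an existing logtype.

#include <stdint.h>
#include <rte_log.h>

static void
log_example(uint16_t port_id)
{
	/* OK: no '\n' in the format string; RTE_LOG_LINE appends one. */
	RTE_LOG_LINE(INFO, EAL, "port %u started", port_id);

	/* With gcc, uncommenting the call below would fail the build on the
	 * static assertion "This log format string contains a \n":
	 *
	 * RTE_LOG_LINE(INFO, EAL, "port %u started\n", port_id);
	 */
}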
* Re: [RFC v2 11/14] log: add a per line log helper 2023-12-08 14:59 ` [RFC v2 11/14] log: add a per line log helper David Marchand @ 2023-12-08 17:15 ` Stephen Hemminger 2023-12-09 7:21 ` fengchengwen 1 sibling, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:15 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb On Fri, 8 Dec 2023 15:59:45 +0100 David Marchand <david.marchand@redhat.com> wrote: > gcc builtin __builtin_strchr can be used as a static assertion to check > whether passed format strings contain a \n. > This can be useful to detect double \n in log messages. > > Signed-off-by: David Marchand <david.marchand@redhat.com> Good idea. Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 11/14] log: add a per line log helper 2023-12-08 14:59 ` [RFC v2 11/14] log: add a per line log helper David Marchand 2023-12-08 17:15 ` Stephen Hemminger @ 2023-12-09 7:21 ` fengchengwen 1 sibling, 0 replies; 122+ messages in thread From: fengchengwen @ 2023-12-09 7:21 UTC (permalink / raw) To: David Marchand, dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb Acked-by: Chengwen Feng <fengchengwen@huawei.com> On 2023/12/8 22:59, David Marchand wrote: > gcc builtin __builtin_strchr can be used as a static assertion to check > whether passed format strings contain a \n. > This can be useful to detect double \n in log messages. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - added a check in checkpatches.sh, > > --- > devtools/checkpatches.sh | 8 ++++++++ > lib/log/rte_log.h | 21 +++++++++++++++++++++ > 2 files changed, 29 insertions(+) > > diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh > index 10b79ca2bc..10d1bf490b 100755 > --- a/devtools/checkpatches.sh > +++ b/devtools/checkpatches.sh > @@ -53,6 +53,14 @@ print_usage () { > check_forbidden_additions() { # <patch> > res=0 > > + # refrain from new calls to RTE_LOG > + awk -v FOLDERS="lib" \ > + -v EXPRESSIONS="RTE_LOG\\\(" \ > + -v RET_ON_FAIL=1 \ > + -v MESSAGE='Prefer RTE_LOG_LINE' \ > + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \ > + "$1" || res=1 > + > # refrain from new additions of rte_panic() and rte_exit() > # multiple folders and expressions are separated by spaces > awk -v FOLDERS="lib drivers" \ > diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h > index 27fb6129a7..da7e672e81 100644 > --- a/lib/log/rte_log.h > +++ b/lib/log/rte_log.h > @@ -17,6 +17,7 @@ > extern "C" { > #endif > > +#include <assert.h> > #include <stdint.h> > #include <stdio.h> > #include <stdarg.h> > @@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap) > RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \ > 0) > > +#ifdef RTE_TOOLCHAIN_GCC > +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ > + static_assert(!__builtin_strchr(fmt, '\n'), \ > + "This log format string contains a \\n") > +#else > +#define RTE_LOG_CHECK_NO_NEWLINE(...) > +#endif > + > +#define RTE_LOG_LINE(l, t, ...) do { \ > + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ > + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ > + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ > +} while (0) > + > +#define RTE_LOG_DP_LINE(l, t, ...) do { \ > + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ > + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ > + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ > +} while (0) > + > #define RTE_LOG_REGISTER_IMPL(type, name, level) \ > int type; \ > RTE_INIT(__##type) \ > ^ permalink raw reply [flat|nested] 122+ messages in thread
* [RFC v2 12/14] lib: convert to per line logging 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (10 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 11/14] log: add a per line log helper David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:16 ` Stephen Hemminger 2023-12-16 9:30 ` Andrew Rybchenko 2023-12-08 14:59 ` [RFC v2 13/14] lib: replace logging helpers David Marchand 2023-12-08 14:59 ` [RFC v2 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Andrew Rybchenko, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. Note: - for acl and sched libraries that still has some debug multilines messages, a direct call to RTE_LOG is used: this will make it easier to notice such special cases, Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/acl/acl_bld.c | 28 +-- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 4 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +++---- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +-- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 ++-- lib/eal/common/eal_common_lcore.c | 26 +-- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +++--- lib/eal/common/eal_common_memzone.c | 24 +-- lib/eal/common/eal_common_options.c | 236 ++++++++++---------- lib/eal/common/eal_common_proc.c | 112 +++++----- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 ++--- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 ++-- lib/eal/common/malloc_mp.c | 72 +++---- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 74 +++---- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +++--- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +-- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +++++------ lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 ++-- lib/eal/linux/eal_hugepage_info.c | 38 ++-- lib/eal/linux/eal_interrupts.c | 116 +++++----- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +++++------ lib/eal/linux/eal_memory.c | 208 +++++++++--------- lib/eal/linux/eal_thread.c | 4 +- lib/eal/linux/eal_timer.c | 10 +- lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- 
lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +-- lib/eal/windows/eal.c | 36 ++-- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 6 +- lib/eal/windows/eal_memalloc.c | 50 ++--- lib/eal/windows/eal_memory.c | 20 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 4 +- lib/eal/windows/rte_thread.c | 28 +-- lib/efd/rte_efd.c | 58 ++--- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/hash/rte_cuckoo_hash.c | 52 ++--- lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 ++-- lib/log/log.c | 6 +- lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/mempool/rte_mempool.c | 24 +-- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/pipeline/rte_pipeline.c | 228 ++++++++++---------- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +-- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 ++--- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 38 ++-- lib/power/power_acpi_cpufreq.c | 106 ++++----- lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ lib/power/power_common.c | 10 +- lib/power/power_cppc_cpufreq.c | 118 +++++----- lib/power/power_intel_uncore.c | 68 +++--- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 144 ++++++------- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +-- lib/power/rte_power_uncore.c | 14 +- lib/rcu/rte_rcu_qsbr.c | 2 +- lib/reorder/rte_reorder.c | 32 +-- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +-- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 ++++++++++++------------ lib/table/rte_table_acl.c | 72 +++---- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 ++-- lib/table/rte_table_hash_key32.c | 38 ++-- lib/table/rte_table_hash_key8.c | 38 ++-- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 ++-- lib/table/rte_table_lpm_ipv6.c | 44 ++-- lib/table/rte_table_stub.c | 4 +- lib/vhost/fd_man.c | 8 +- 129 files changed, 2278 insertions(+), 2279 deletions(-) diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index eaf8770415..27bdd6b9a1 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -1017,8 +1017,8 @@ build_trie(struct acl_build_context *context, struct rte_acl_build_rule *head, break; default: - RTE_LOG(ERR, ACL, - "Error in rule[%u] type - %hhu\n", + RTE_LOG_LINE(ERR, ACL, + "Error in rule[%u] type - %hhu", rule->f->data.userdata, rule->config->defs[n].type); return NULL; @@ -1374,7 +1374,7 @@ acl_build_tries(struct acl_build_context *context, last = build_one_trie(context, rule_sets, n, context->node_max); if (context->bld_tries[n].trie == NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1383,8 +1383,8 @@ 
acl_build_tries(struct acl_build_context *context, break; if (num_tries == RTE_DIM(context->tries)) { - RTE_LOG(ERR, ACL, - "Exceeded max number of tries: %u\n", + RTE_LOG_LINE(ERR, ACL, + "Exceeded max number of tries: %u", num_tries); return -ENOMEM; } @@ -1409,7 +1409,7 @@ acl_build_tries(struct acl_build_context *context, */ last = build_one_trie(context, rule_sets, n, INT32_MAX); if (context->bld_tries[n].trie == NULL || last != NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1435,8 +1435,8 @@ acl_build_log(const struct acl_build_context *ctx) for (n = 0; n < RTE_DIM(ctx->tries); n++) { if (ctx->tries[n].count != 0) - RTE_LOG(DEBUG, ACL, - "trie %u: number of rules: %u, indexes: %u\n", + RTE_LOG_LINE(DEBUG, ACL, + "trie %u: number of rules: %u, indexes: %u", n, ctx->tries[n].count, ctx->tries[n].num_data_indexes); } @@ -1526,8 +1526,8 @@ acl_bld(struct acl_build_context *bcx, struct rte_acl_ctx *ctx, /* build phase runs out of memory. */ if (rc != 0) { - RTE_LOG(ERR, ACL, - "ACL context: %s, %s() failed with error code: %d\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, %s() failed with error code: %d", bcx->acx->name, __func__, rc); return rc; } @@ -1568,8 +1568,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) for (i = 0; i != cfg->num_fields; i++) { if (cfg->defs[i].type > RTE_ACL_FIELD_TYPE_BITMASK) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid type: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid type: %hhu for %u-th field", ctx->name, cfg->defs[i].type, i); return -EINVAL; } @@ -1580,8 +1580,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) ; if (j == RTE_DIM(field_sizes)) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid size: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid size: %hhu for %u-th field", ctx->name, cfg->defs[i].size, i); return -EINVAL; } diff --git a/lib/acl/acl_gen.c b/lib/acl/acl_gen.c index 03a47ea231..2f612df1e0 100644 --- a/lib/acl/acl_gen.c +++ b/lib/acl/acl_gen.c @@ -471,9 +471,9 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, XMM_SIZE; if (total_size > max_size) { - RTE_LOG(DEBUG, ACL, + RTE_LOG_LINE(DEBUG, ACL, "Gen phase for ACL ctx \"%s\" exceeds max_size limit, " - "bytes required: %zu, allowed: %zu\n", + "bytes required: %zu, allowed: %zu", ctx->name, total_size, max_size); return -ERANGE; } @@ -481,8 +481,8 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE, ctx->socket_id); if (mem == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes on socket %d for %s failed", total_size, ctx->socket_id, ctx->name); return -ENOMEM; } diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c index 760c3587d4..bec26d0a22 100644 --- a/lib/acl/rte_acl.c +++ b/lib/acl/rte_acl.c @@ -399,15 +399,15 @@ rte_acl_create(const struct rte_acl_param *param) te = rte_zmalloc("ACL_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, ACL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, ACL, "Cannot allocate tailq entry!"); goto exit; } ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, param->socket_id); if (ctx == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes 
on socket %d for %s failed", sz, param->socket_id, name); rte_free(te); goto exit; @@ -473,7 +473,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules, ((uintptr_t)rules + i * ctx->rule_sz); rc = acl_check_rule(&rv->data); if (rc != 0) { - RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n", + RTE_LOG_LINE(ERR, ACL, "%s(%s): rule #%u is invalid", __func__, ctx->name, i + 1); return rc; } diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 238d65692a..228e62c8cd 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,8 +26,8 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated " - "by pool: %zu bytes\n", __func__, sz, pool->alloc); + RTE_LOG_LINE(ERR, ACL, "%s(%zu) failed, currently allocated " + "by pool: %zu bytes", __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; } diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c index acac14131a..456f27112c 100644 --- a/lib/eal/common/eal_common_bus.c +++ b/lib/eal/common/eal_common_bus.c @@ -35,14 +35,14 @@ rte_bus_register(struct rte_bus *bus) RTE_VERIFY(!bus->plug || bus->unplug); TAILQ_INSERT_TAIL(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] bus.", rte_bus_name(bus)); } void rte_bus_unregister(struct rte_bus *bus) { TAILQ_REMOVE(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] bus.", rte_bus_name(bus)); } /* Scan all the buses for registered devices */ @@ -55,7 +55,7 @@ rte_bus_scan(void) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus->scan(); if (ret) - RTE_LOG(ERR, EAL, "Scan for (%s) bus failed.\n", + RTE_LOG_LINE(ERR, EAL, "Scan for (%s) bus failed.", rte_bus_name(bus)); } @@ -77,14 +77,14 @@ rte_bus_probe(void) ret = bus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(bus)); } if (vbus) { ret = vbus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(vbus)); } @@ -133,7 +133,7 @@ rte_bus_dump(FILE *f) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus_dump_one(f, bus); if (ret) { - RTE_LOG(ERR, EAL, "Unable to write to stream (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Unable to write to stream (%d)", ret); break; } @@ -235,15 +235,15 @@ rte_bus_get_iommu_class(void) continue; bus_iova_mode = bus->get_iommu_class(); - RTE_LOG(DEBUG, EAL, "Bus %s wants IOVA as '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Bus %s wants IOVA as '%s'", rte_bus_name(bus), bus_iova_mode == RTE_IOVA_DC ? "DC" : (bus_iova_mode == RTE_IOVA_PA ? 
"PA" : "VA")); if (bus_iova_mode == RTE_IOVA_PA) { buses_want_pa = true; if (!RTE_IOVA_IN_MBUF) - RTE_LOG(WARNING, EAL, - "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.\n", + RTE_LOG_LINE(WARNING, EAL, + "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.", rte_bus_name(bus)); } else if (bus_iova_mode == RTE_IOVA_VA) buses_want_va = true; @@ -255,8 +255,8 @@ rte_bus_get_iommu_class(void) } else { mode = RTE_IOVA_DC; if (buses_want_va) { - RTE_LOG(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.\n"); - RTE_LOG(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.\n"); + RTE_LOG_LINE(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'."); + RTE_LOG_LINE(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize."); } } diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c index 0187076af1..02a983b286 100644 --- a/lib/eal/common/eal_common_class.c +++ b/lib/eal/common/eal_common_class.c @@ -19,14 +19,14 @@ rte_class_register(struct rte_class *class) RTE_VERIFY(class->name && strlen(class->name)); TAILQ_INSERT_TAIL(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] device class.", class->name); } void rte_class_unregister(struct rte_class *class) { TAILQ_REMOVE(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] device class.", class->name); } struct rte_class * diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c index 0daf0f3188..4b6530f2fb 100644 --- a/lib/eal/common/eal_common_config.c +++ b/lib/eal/common/eal_common_config.c @@ -31,7 +31,7 @@ int eal_set_runtime_dir(const char *run_dir) { if (strlcpy(runtime_dir, run_dir, PATH_MAX) >= PATH_MAX) { - RTE_LOG(ERR, EAL, "Runtime directory string too long\n"); + RTE_LOG_LINE(ERR, EAL, "Runtime directory string too long"); return -1; } diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c index 9cac9c6390..065843f34e 100644 --- a/lib/eal/common/eal_common_debug.c +++ b/lib/eal/common/eal_common_debug.c @@ -16,7 +16,7 @@ __rte_panic(const char *funcname, const char *format, ...) { va_list ap; - rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, "PANIC in %s():\n", funcname); + RTE_LOG_LINE(CRIT, EAL, "PANIC in %s():", funcname); va_start(ap, format); rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap); va_end(ap); @@ -42,7 +42,7 @@ rte_exit(int exit_code, const char *format, ...) 
va_end(ap); if (rte_eal_cleanup() != 0 && rte_errno != EALREADY) - RTE_LOG(CRIT, EAL, - "EAL could not release all resources\n"); + RTE_LOG_LINE(CRIT, EAL, + "EAL could not release all resources"); exit(exit_code); } diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c index 614ef6c9fc..359907798a 100644 --- a/lib/eal/common/eal_common_dev.c +++ b/lib/eal/common/eal_common_dev.c @@ -182,7 +182,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) goto err_devarg; if (da->bus->plug == NULL) { - RTE_LOG(ERR, EAL, "Function plug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function plug not supported by bus (%s)", da->bus->name); ret = -ENOTSUP; goto err_devarg; @@ -199,7 +199,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) dev = da->bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s)", da->name); ret = -ENODEV; goto err_devarg; @@ -214,7 +214,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) ret = -ENOTSUP; if (ret && !rte_dev_is_probed(dev)) { /* if hasn't ever succeeded */ - RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot attach the device (%s)", dev->name); return ret; } @@ -248,13 +248,13 @@ rte_dev_probe(const char *devargs) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug add device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug add device"); return req.result; } @@ -264,8 +264,8 @@ rte_dev_probe(const char *devargs) ret = local_dev_probe(devargs, &dev); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on primary process"); /** * it is possible that secondary process failed to attached a @@ -282,8 +282,8 @@ rte_dev_probe(const char *devargs) /* if any communication error, we need to rollback. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug add request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug add request to secondary"); ret = -ENOMSG; goto rollback; } @@ -293,8 +293,8 @@ rte_dev_probe(const char *devargs) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on secondary process"); ret = req.result; /* for -EEXIST, we don't need to rollback. */ @@ -310,15 +310,15 @@ rte_dev_probe(const char *devargs) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on secondary." - "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); /* primary rollback itself. */ if (local_dev_remove(dev) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on primary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -331,13 +331,13 @@ rte_eal_hotplug_remove(const char *busname, const char *devname) bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); return -ENOENT; } dev = bus->find_device(NULL, cmp_dev_name, devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", devname); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", devname); return -EINVAL; } @@ -351,14 +351,14 @@ local_dev_remove(struct rte_device *dev) int ret; if (dev->bus->unplug == NULL) { - RTE_LOG(ERR, EAL, "Function unplug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function unplug not supported by bus (%s)", dev->bus->name); return -ENOTSUP; } ret = dev->bus->unplug(dev); if (ret) { - RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot detach the device (%s)", dev->name); return (ret < 0) ? ret : -ENOENT; } @@ -374,7 +374,7 @@ rte_dev_remove(struct rte_device *dev) int ret; if (!rte_dev_is_probed(dev)) { - RTE_LOG(ERR, EAL, "Device is not probed\n"); + RTE_LOG_LINE(ERR, EAL, "Device is not probed"); return -ENOENT; } @@ -394,13 +394,13 @@ rte_dev_remove(struct rte_device *dev) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug remove device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug remove device"); return req.result; } @@ -414,8 +414,8 @@ rte_dev_remove(struct rte_device *dev) * part of the secondary processes still detached it successfully. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send device detach request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send device detach request to secondary"); ret = -ENOMSG; goto rollback; } @@ -425,8 +425,8 @@ rte_dev_remove(struct rte_device *dev) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on secondary process"); ret = req.result; /** * if -ENOENT, we don't need to rollback, since devices is @@ -441,8 +441,8 @@ rte_dev_remove(struct rte_device *dev) /* if primary failed, still need to consider if rollback is necessary */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on primary process"); /* if -ENOENT, we don't need to rollback */ if (ret == -ENOENT) return ret; @@ -456,9 +456,9 @@ rte_dev_remove(struct rte_device *dev) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device detach on secondary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -508,16 +508,16 @@ rte_dev_event_callback_register(const char *device_name, } TAILQ_INSERT_TAIL(&dev_event_cbs, event_cb, next); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for device " "event callback."); ret = -ENOMEM; goto error; } } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "The callback is already exist, no need " - "to register again.\n"); + "to register again."); event_cb = NULL; ret = -EEXIST; goto error; @@ -635,17 +635,17 @@ rte_dev_iterator_init(struct rte_dev_iterator *it, * one layer specified. */ if (bus == NULL && cls == NULL) { - RTE_LOG(DEBUG, EAL, "Either bus or class must be specified.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Either bus or class must be specified."); rte_errno = EINVAL; goto get_out; } if (bus != NULL && bus->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Bus %s not supported\n", bus->name); + RTE_LOG_LINE(DEBUG, EAL, "Bus %s not supported", bus->name); rte_errno = ENOTSUP; goto get_out; } if (cls != NULL && cls->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Class %s not supported\n", cls->name); + RTE_LOG_LINE(DEBUG, EAL, "Class %s not supported", cls->name); rte_errno = ENOTSUP; goto get_out; } diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c index fb5d0a293b..dbf5affa76 100644 --- a/lib/eal/common/eal_common_devargs.c +++ b/lib/eal/common/eal_common_devargs.c @@ -39,12 +39,12 @@ devargs_bus_parse_default(struct rte_devargs *devargs, /* Parse devargs name from bus key-value list. */ name = rte_kvargs_get(bus_args, "name"); if (name == NULL) { - RTE_LOG(DEBUG, EAL, "devargs name not found: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "devargs name not found: %s", devargs->data); return 0; } if (rte_strscpy(devargs->name, name, sizeof(devargs->name)) < 0) { - RTE_LOG(ERR, EAL, "devargs name too long: %s\n", + RTE_LOG_LINE(ERR, EAL, "devargs name too long: %s", devargs->data); return -E2BIG; } @@ -79,7 +79,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, if (devargs->data != devstr) { devargs->data = strdup(devstr); if (devargs->data == NULL) { - RTE_LOG(ERR, EAL, "OOM\n"); + RTE_LOG_LINE(ERR, EAL, "OOM"); ret = -ENOMEM; goto get_out; } @@ -133,7 +133,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->bus_str = layers[i].str; devargs->bus = rte_bus_find_by_name(kv->value); if (devargs->bus == NULL) { - RTE_LOG(ERR, EAL, "Could not find bus \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find bus \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -142,7 +142,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->cls_str = layers[i].str; devargs->cls = rte_class_find_by_name(kv->value); if (devargs->cls == NULL) { - RTE_LOG(ERR, EAL, "Could not find class \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find class \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -217,7 +217,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) da->name[i] = devname[i]; i++; if (i == maxlen) { - RTE_LOG(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu\n", + RTE_LOG_LINE(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu", dev, maxlen); da->name[i - 1] = '\0'; return -EINVAL; @@ -227,7 +227,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) if (bus == NULL) { bus = rte_bus_find_by_device_name(da->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "failed to parse device \"%s\"\n", + 
RTE_LOG_LINE(ERR, EAL, "failed to parse device \"%s\"", da->name); return -EFAULT; } @@ -239,7 +239,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) else da->data = strdup(""); if (da->data == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse arguments\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse arguments"); return -ENOMEM; } da->drv_str = da->data; @@ -266,7 +266,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...) len += 1; dev = calloc(1, (size_t)len); if (dev == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse device\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse device"); return -ENOMEM; } diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c index 95da55d9b0..721cb63bf2 100644 --- a/lib/eal/common/eal_common_dynmem.c +++ b/lib/eal/common/eal_common_dynmem.c @@ -76,7 +76,7 @@ eal_dynmem_memseg_lists_init(void) n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count(); memtypes = calloc(n_memtypes, sizeof(*memtypes)); if (memtypes == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate space for memory types"); return -1; } @@ -101,8 +101,8 @@ eal_dynmem_memseg_lists_init(void) memtypes[cur_type].page_sz = hugepage_sz; memtypes[cur_type].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected memory type: " - "socket_id:%u hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected memory type: " + "socket_id:%u hugepage_sz:%" PRIu64, socket_id, hugepage_sz); } } @@ -120,7 +120,7 @@ eal_dynmem_memseg_lists_init(void) max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes; if (max_seglists_per_type == 0) { - RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } @@ -171,15 +171,15 @@ eal_dynmem_memseg_lists_init(void) /* limit number of segment lists according to our maximum */ n_seglists = RTE_MIN(n_seglists, max_seglists_per_type); - RTE_LOG(DEBUG, EAL, "Creating %i segment lists: " - "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Creating %i segment lists: " + "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64, n_seglists, n_segs, socket_id, pagesz); /* create all segment lists */ for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) { if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } msl = &mcfg->memsegs[msl_idx++]; @@ -189,7 +189,7 @@ eal_dynmem_memseg_lists_init(void) goto out; if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); goto out; } } @@ -287,9 +287,9 @@ eal_dynmem_hugepage_init(void) if (num_pages == 0) continue; - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Allocating %u pages of size %" PRIu64 "M " - "on socket %i\n", + "on socket %i", num_pages, hpi->hugepage_sz >> 20, socket_id); /* we may not be able to allocate all pages in one go, @@ -307,7 +307,7 @@ eal_dynmem_hugepage_init(void) pages = malloc(sizeof(*pages) * needed); if (pages == NULL) { - RTE_LOG(ERR, EAL, "Failed to malloc pages\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to malloc pages"); return -1; } @@ -342,7 +342,7 @@ 
eal_dynmem_hugepage_init(void) continue; if (rte_mem_alloc_validator_register("socket-limit", limits_callback, i, limit)) - RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to register socket limits validator callback"); } } return 0; @@ -515,8 +515,8 @@ eal_dynmem_calc_num_pages_per_socket( internal_conf->socket_mem[socket] / 0x100000); available = requested - ((unsigned int)(memory[socket] / 0x100000)); - RTE_LOG(ERR, EAL, "Not enough memory available on " - "socket %u! Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available on " + "socket %u! Requested: %uMB, available: %uMB", socket, requested, available); return -1; } @@ -526,8 +526,8 @@ eal_dynmem_calc_num_pages_per_socket( if (total_mem > 0) { requested = (unsigned int)(internal_conf->memory / 0x100000); available = requested - (unsigned int)(total_mem / 0x100000); - RTE_LOG(ERR, EAL, "Not enough memory available! " - "Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available! " + "Requested: %uMB, available: %uMB", requested, available); return -1; } diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c index 2055bfa57d..7b90e01500 100644 --- a/lib/eal/common/eal_common_fbarray.c +++ b/lib/eal/common/eal_common_fbarray.c @@ -83,7 +83,7 @@ resize_and_map(int fd, const char *path, void *addr, size_t len) void *map_addr; if (eal_file_truncate(fd, len)) { - RTE_LOG(ERR, EAL, "Cannot truncate %s\n", path); + RTE_LOG_LINE(ERR, EAL, "Cannot truncate %s", path); return -1; } @@ -755,7 +755,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, void *new_data = rte_mem_map(data, mmap_len, RTE_PROT_READ | RTE_PROT_WRITE, flags, fd, 0); if (new_data == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s", __func__, rte_strerror(rte_errno)); goto fail; } @@ -770,12 +770,12 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, */ fd = eal_file_open(path, EAL_OPEN_CREATE | EAL_OPEN_READWRITE); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't open %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't open %s: %s", __func__, path, rte_strerror(rte_errno)); goto fail; } else if (eal_file_lock( fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't lock %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't lock %s: %s", __func__, path, rte_strerror(rte_errno)); rte_errno = EBUSY; goto fail; @@ -1017,7 +1017,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) */ fd = tmp->fd; if (eal_file_lock(fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "Cannot destroy fbarray - another process is using it\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot destroy fbarray - another process is using it"); rte_errno = EBUSY; ret = -1; goto out; @@ -1026,7 +1026,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) /* we're OK to destroy the file */ eal_get_fbarray_path(path, sizeof(path), arr->name); if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "Cannot unlink fbarray: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot unlink fbarray: %s", strerror(errno)); rte_errno = errno; /* diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c index 97b64fed58..6a5723a068 100644 --- a/lib/eal/common/eal_common_interrupts.c +++ b/lib/eal/common/eal_common_interrupts.c @@ -15,7 +15,7 @@ /* Macros to check for 
valid interrupt handle */ #define CHECK_VALID_INTR_HANDLE(intr_handle) do { \ if (intr_handle == NULL) { \ - RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \ + RTE_LOG_LINE(DEBUG, EAL, "Interrupt instance unallocated"); \ rte_errno = EINVAL; \ goto fail; \ } \ @@ -37,7 +37,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) * defined flags. */ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) { - RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags); + RTE_LOG_LINE(DEBUG, EAL, "Invalid alloc flag passed 0x%x", flags); rte_errno = EINVAL; return NULL; } @@ -48,7 +48,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) else intr_handle = calloc(1, sizeof(*intr_handle)); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate intr_handle"); rte_errno = ENOMEM; return NULL; } @@ -61,7 +61,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(int)); } if (intr_handle->efds == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -75,7 +75,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(struct rte_epoll_event)); } if (intr_handle->elist == NULL) { - RTE_LOG(ERR, EAL, "fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -100,7 +100,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src) struct rte_intr_handle *intr_handle; if (src == NULL) { - RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n"); + RTE_LOG_LINE(DEBUG, EAL, "Source interrupt instance unallocated"); rte_errno = EINVAL; return NULL; } @@ -129,7 +129,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) CHECK_VALID_INTR_HANDLE(intr_handle); if (size == 0) { - RTE_LOG(DEBUG, EAL, "Size can't be zero\n"); + RTE_LOG_LINE(DEBUG, EAL, "Size can't be zero"); rte_errno = EINVAL; goto fail; } @@ -143,7 +143,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) tmp_efds = realloc(intr_handle->efds, size * sizeof(int)); } if (tmp_efds == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the efds list"); rte_errno = ENOMEM; goto fail; } @@ -157,7 +157,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) size * sizeof(struct rte_epoll_event)); } if (tmp_elist == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the event list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the event list"); rte_errno = ENOMEM; goto fail; } @@ -253,8 +253,8 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (max_intr > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " - "the number of available events (%d)\n", max_intr, + RTE_LOG_LINE(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " + "the number of available events (%d)", max_intr, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -332,7 +332,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = EINVAL; 
goto fail; @@ -349,7 +349,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -368,7 +368,7 @@ struct rte_epoll_event *rte_intr_elist_index_get( CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -385,7 +385,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -408,7 +408,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, return 0; if (size > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size, + RTE_LOG_LINE(DEBUG, EAL, "Invalid size %d, max limit %d", size, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -419,7 +419,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, else intr_handle->intr_vec = calloc(size, sizeof(int)); if (intr_handle->intr_vec == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate %d intr_vec", size); rte_errno = ENOMEM; goto fail; } @@ -437,7 +437,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; @@ -454,7 +454,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c index 6807a38247..4ec1996d12 100644 --- a/lib/eal/common/eal_common_lcore.c +++ b/lib/eal/common/eal_common_lcore.c @@ -174,8 +174,8 @@ rte_eal_cpu_init(void) lcore_config[lcore_id].core_role = ROLE_RTE; lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id); lcore_config[lcore_id].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected lcore %u as " - "core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected lcore %u as " + "core %u on socket %u", lcore_id, lcore_config[lcore_id].core_id, lcore_config[lcore_id].socket_id); count++; @@ -183,17 +183,17 @@ rte_eal_cpu_init(void) for (; lcore_id < CPU_SETSIZE; lcore_id++) { if (eal_cpu_detected(lcore_id) == 0) continue; - RTE_LOG(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u", lcore_id, eal_cpu_core_id(lcore_id), eal_cpu_socket_id(lcore_id)); } /* Set the count of enabled logical cores of the EAL configuration */ config->lcore_count = count; - RTE_LOG(DEBUG, EAL, - "Maximum 
logical cores by configuration: %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Maximum logical cores by configuration: %u", RTE_MAX_LCORE); - RTE_LOG(INFO, EAL, "Detected CPU lcores: %u\n", config->lcore_count); + RTE_LOG_LINE(INFO, EAL, "Detected CPU lcores: %u", config->lcore_count); /* sort all socket id's in ascending order */ qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id), @@ -208,7 +208,7 @@ rte_eal_cpu_init(void) socket_id; prev_socket_id = socket_id; } - RTE_LOG(INFO, EAL, "Detected NUMA nodes: %u\n", config->numa_node_count); + RTE_LOG_LINE(INFO, EAL, "Detected NUMA nodes: %u", config->numa_node_count); return 0; } @@ -247,7 +247,7 @@ callback_init(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->init == NULL) return 0; - RTE_LOG(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u", callback->name, lcore_id); return callback->init(lcore_id, callback->arg); } @@ -257,7 +257,7 @@ callback_uninit(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->uninit == NULL) return; - RTE_LOG(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u", callback->name, lcore_id); callback->uninit(lcore_id, callback->arg); } @@ -311,7 +311,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init, } no_init: TAILQ_INSERT_TAIL(&lcore_callbacks, callback, next); - RTE_LOG(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).", callback->name, callback->init == NULL ? "NO " : "", callback->uninit == NULL ? "NO " : ""); out: @@ -339,7 +339,7 @@ rte_lcore_callback_unregister(void *handle) no_uninit: TAILQ_REMOVE(&lcore_callbacks, callback, next); rte_rwlock_write_unlock(&lcore_lock); - RTE_LOG(DEBUG, EAL, "Unregistered lcore callback %s-%p.\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered lcore callback %s-%p.", callback->name, callback->arg); free_callback(callback); } @@ -361,7 +361,7 @@ eal_lcore_non_eal_allocate(void) break; } if (lcore_id == RTE_MAX_LCORE) { - RTE_LOG(DEBUG, EAL, "No lcore available.\n"); + RTE_LOG_LINE(DEBUG, EAL, "No lcore available."); goto out; } TAILQ_FOREACH(callback, &lcore_callbacks, next) { @@ -375,7 +375,7 @@ eal_lcore_non_eal_allocate(void) callback_uninit(prev, lcore_id); prev = TAILQ_PREV(prev, lcore_callbacks_head, next); } - RTE_LOG(DEBUG, EAL, "Initialization refused for lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Initialization refused for lcore %u.", lcore_id); cfg->lcore_role[lcore_id] = ROLE_OFF; cfg->lcore_count--; diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c index ab04479c1c..feb22c2b2f 100644 --- a/lib/eal/common/eal_common_memalloc.c +++ b/lib/eal/common/eal_common_memalloc.c @@ -186,7 +186,7 @@ eal_memalloc_mem_event_callback_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' registered", name, arg); unlock: @@ -225,7 +225,7 @@ eal_memalloc_mem_event_callback_unregister(const char *name, void *arg) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' unregistered", name, arg); unlock: @@ -242,7 +242,7 @@ eal_memalloc_mem_event_notify(enum rte_mem_event event, const void *start, rte_rwlock_read_lock(&mem_event_rwlock); 
TAILQ_FOREACH(entry, &mem_event_callback_list, next) { - RTE_LOG(DEBUG, EAL, "Calling mem event callback '%s:%p'\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem event callback '%s:%p'", entry->name, entry->arg); entry->clb(event, start, len, entry->arg); } @@ -293,7 +293,7 @@ eal_memalloc_mem_alloc_validator_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered", name, socket_id, limit); unlock: @@ -332,7 +332,7 @@ eal_memalloc_mem_alloc_validator_unregister(const char *name, int socket_id) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered", name, socket_id); unlock: @@ -351,7 +351,7 @@ eal_memalloc_mem_alloc_validate(int socket_id, size_t new_len) TAILQ_FOREACH(entry, &mem_alloc_validator_list, next) { if (entry->socket_id != socket_id || entry->limit > new_len) continue; - RTE_LOG(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i", entry->name, entry->socket_id); if (entry->clb(socket_id, entry->limit, new_len) < 0) ret = -1; diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c index d9433db623..9e183669a6 100644 --- a/lib/eal/common/eal_common_memory.c +++ b/lib/eal/common/eal_common_memory.c @@ -57,7 +57,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, if (system_page_sz == 0) system_page_sz = rte_mem_page_size(); - RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size); + RTE_LOG_LINE(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes", *size); addr_is_hint = (flags & EAL_VIRTUAL_AREA_ADDR_IS_HINT) > 0; allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0; @@ -94,7 +94,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, do { map_sz = no_align ? *size : *size + page_sz; if (map_sz > SIZE_MAX) { - RTE_LOG(ERR, EAL, "Map size too big\n"); + RTE_LOG_LINE(ERR, EAL, "Map size too big"); rte_errno = E2BIG; return NULL; } @@ -125,16 +125,16 @@ eal_get_virtual_area(void *requested_addr, size_t *size, RTE_PTR_ALIGN(mapped_addr, page_sz); if (*size == 0) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area of any size: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area of any size: %s", rte_strerror(rte_errno)); return NULL; } else if (mapped_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area: %s", rte_strerror(rte_errno)); return NULL; } else if (requested_addr != NULL && !addr_is_hint && aligned_addr != requested_addr) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)", requested_addr, aligned_addr); eal_mem_free(mapped_addr, map_sz); rte_errno = EADDRNOTAVAIL; @@ -146,19 +146,19 @@ eal_get_virtual_area(void *requested_addr, size_t *size, * a base virtual address. */ if (internal_conf->base_virtaddr != 0) { - RTE_LOG(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(WARNING, EAL, "WARNING! 
Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory into secondary processes"); } else { - RTE_LOG(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes"); } } else if (next_baseaddr != NULL) { next_baseaddr = RTE_PTR_ADD(aligned_addr, *size); } - RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n", + RTE_LOG_LINE(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)", aligned_addr, *size); if (unmap) { @@ -202,7 +202,7 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, { if (rte_fbarray_init(&msl->memseg_arr, name, n_segs, sizeof(struct rte_memseg))) { - RTE_LOG(ERR, EAL, "Cannot allocate memseg list: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list: %s", rte_strerror(rte_errno)); return -1; } @@ -212,8 +212,8 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, msl->base_va = NULL; msl->heap = heap; - RTE_LOG(DEBUG, EAL, - "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB", socket_id, page_sz >> 10); return 0; @@ -251,8 +251,8 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) * including common code, so don't duplicate the message. 
*/ if (rte_errno == EADDRNOTAVAIL) - RTE_LOG(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " - "please use '--" OPT_BASE_VIRTADDR "' option\n", + RTE_LOG_LINE(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " + "please use '--" OPT_BASE_VIRTADDR "' option", (unsigned long long)mem_sz, msl->base_va); #endif return -1; @@ -260,7 +260,7 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) msl->base_va = addr; msl->len = mem_sz; - RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n", + RTE_LOG_LINE(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx", addr, mem_sz); return 0; @@ -472,7 +472,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -487,7 +487,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -503,7 +503,7 @@ rte_mem_alloc_validator_register(const char *name, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -519,7 +519,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -545,10 +545,10 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, if (!(iova & *mask)) return 0; - RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n", + RTE_LOG_LINE(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range", ms->iova, ms->len); - RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask); + RTE_LOG_LINE(DEBUG, EAL, "\tusing dma mask %"PRIx64, *mask); return 1; } @@ -565,7 +565,7 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe) /* Sanity check. We only check width can be managed with 64 bits * variables. Indeed any higher value is likely wrong. 
*/ if (maskbits > MAX_DMA_MASK_BITS) { - RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n", + RTE_LOG_LINE(ERR, EAL, "wrong dma mask size %u (Max: %u)", maskbits, MAX_DMA_MASK_BITS); return -1; } @@ -925,7 +925,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[], /* get next available socket ID */ socket_id = mcfg->next_socket_id; if (socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -1030,7 +1030,7 @@ rte_eal_memory_detach(void) /* detach internal memory subsystem data first */ if (eal_memalloc_cleanup()) - RTE_LOG(ERR, EAL, "Could not release memory subsystem data\n"); + RTE_LOG_LINE(ERR, EAL, "Could not release memory subsystem data"); for (i = 0; i < RTE_DIM(mcfg->memsegs); i++) { struct rte_memseg_list *msl = &mcfg->memsegs[i]; @@ -1047,7 +1047,7 @@ rte_eal_memory_detach(void) */ if (!msl->external) if (rte_mem_unmap(msl->base_va, msl->len) != 0) - RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap memory: %s", rte_strerror(rte_errno)); /* @@ -1056,7 +1056,7 @@ rte_eal_memory_detach(void) * have no way of knowing if they still do. */ if (rte_fbarray_detach(&msl->memseg_arr)) - RTE_LOG(ERR, EAL, "Could not detach fbarray: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not detach fbarray: %s", rte_strerror(rte_errno)); } rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock); @@ -1068,7 +1068,7 @@ rte_eal_memory_detach(void) */ if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) { if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0) - RTE_LOG(ERR, EAL, "Could not unmap shared memory config: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap shared memory config: %s", rte_strerror(rte_errno)); } rte_eal_get_configuration()->mem_config = NULL; @@ -1084,7 +1084,7 @@ rte_eal_memory_init(void) eal_get_internal_configuration(); int retval; - RTE_LOG(DEBUG, EAL, "Setting up physically contiguous memory...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Setting up physically contiguous memory..."); if (rte_eal_memseg_init() < 0) goto fail; @@ -1213,7 +1213,7 @@ handle_eal_memzone_info_request(const char *cmd __rte_unused, /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return -1; } page_sz = (size_t)mz->hugepage_sz; @@ -1404,7 +1404,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1477,7 +1477,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1555,7 +1555,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c index 
1f3e701499..fc478d0fac 100644 --- a/lib/eal/common/eal_common_memzone.c +++ b/lib/eal/common/eal_common_memzone.c @@ -31,13 +31,13 @@ rte_memzone_max_set(size_t max) struct rte_mem_config *mcfg; if (eal_get_internal_configuration()->init_complete > 0) { - RTE_LOG(ERR, EAL, "Max memzone cannot be set after EAL init\n"); + RTE_LOG_LINE(ERR, EAL, "Max memzone cannot be set after EAL init"); return -1; } mcfg = rte_eal_get_configuration()->mem_config; if (mcfg == NULL) { - RTE_LOG(ERR, EAL, "Failed to set max memzone count\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to set max memzone count"); return -1; } @@ -116,16 +116,16 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* no more room in config */ if (arr->count >= arr->len) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): Number of requested memzone segments exceeds maximum " - "%u\n", __func__, arr->len); + "%u", __func__, arr->len); rte_errno = ENOSPC; return NULL; } if (strlen(name) > sizeof(mz->name) - 1) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s>: name too long", __func__, name); rte_errno = ENAMETOOLONG; return NULL; @@ -133,7 +133,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* zone already exist */ if ((memzone_lookup_thread_unsafe(name)) != NULL) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s> already exists", __func__, name); rte_errno = EEXIST; return NULL; @@ -141,7 +141,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* if alignment is not a power of two */ if (align && !rte_is_power_of_2(align)) { - RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Invalid alignment: %u", __func__, align); rte_errno = EINVAL; return NULL; @@ -218,7 +218,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, } if (mz == NULL) { - RTE_LOG(ERR, EAL, "%s(): Cannot find free memzone\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot find free memzone", __func__); malloc_heap_free(elem); rte_errno = ENOSPC; return NULL; @@ -323,7 +323,7 @@ rte_memzone_free(const struct rte_memzone *mz) if (found_mz == NULL) { ret = -EINVAL; } else if (found_mz->addr == NULL) { - RTE_LOG(ERR, EAL, "Memzone is not allocated\n"); + RTE_LOG_LINE(ERR, EAL, "Memzone is not allocated"); ret = -EINVAL; } else { addr = found_mz->addr; @@ -385,7 +385,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg) /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return; } page_sz = (size_t)mz->hugepage_sz; @@ -434,11 +434,11 @@ rte_eal_memzone_init(void) if (rte_eal_process_type() == RTE_PROC_PRIMARY && rte_fbarray_init(&mcfg->memzones, "memzone", rte_memzone_max_get(), sizeof(struct rte_memzone))) { - RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memzone list"); ret = -1; } else if (rte_eal_process_type() == RTE_PROC_SECONDARY && rte_fbarray_attach(&mcfg->memzones)) { - RTE_LOG(ERR, EAL, "Cannot attach to memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to memzone list"); ret = -1; } diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index e9ba01fb89..c1af05b134 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -255,14 +255,14 
@@ eal_option_device_add(enum rte_devtype type, const char *optarg) optlen = strlen(optarg) + 1; devopt = calloc(1, sizeof(*devopt) + optlen); if (devopt == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate device option"); return -ENOMEM; } devopt->type = type; ret = strlcpy(devopt->arg, optarg, optlen); if (ret < 0) { - RTE_LOG(ERR, EAL, "Unable to copy device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to copy device option"); free(devopt); return -EINVAL; } @@ -281,7 +281,7 @@ eal_option_device_parse(void) if (ret == 0) { ret = rte_devargs_add(devopt->type, devopt->arg); if (ret) - RTE_LOG(ERR, EAL, "Unable to parse device '%s'\n", + RTE_LOG_LINE(ERR, EAL, "Unable to parse device '%s'", devopt->arg); } TAILQ_REMOVE(&devopt_list, devopt, next); @@ -360,7 +360,7 @@ eal_plugin_add(const char *path) solib = malloc(sizeof(*solib)); if (solib == NULL) { - RTE_LOG(ERR, EAL, "malloc(solib) failed\n"); + RTE_LOG_LINE(ERR, EAL, "malloc(solib) failed"); return -1; } memset(solib, 0, sizeof(*solib)); @@ -390,7 +390,7 @@ eal_plugindir_init(const char *path) d = opendir(path); if (d == NULL) { - RTE_LOG(ERR, EAL, "failed to open directory %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open directory %s: %s", path, strerror(errno)); return -1; } @@ -442,13 +442,13 @@ verify_perms(const char *dirpath) /* call stat to check for permissions and ensure not world writable */ if (stat(dirpath, &st) != 0) { - RTE_LOG(ERR, EAL, "Error with stat on %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with stat on %s, %s", dirpath, strerror(errno)); return -1; } if (st.st_mode & S_IWOTH) { - RTE_LOG(ERR, EAL, - "Error, directory path %s is world-writable and insecure\n", + RTE_LOG_LINE(ERR, EAL, + "Error, directory path %s is world-writable and insecure", dirpath); return -1; } @@ -466,16 +466,16 @@ eal_dlopen(const char *pathname) /* not a full or relative path, try a load from system dirs */ retval = dlopen(pathname, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); return retval; } if (realp == NULL) { - RTE_LOG(ERR, EAL, "Error with realpath for %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with realpath for %s, %s", pathname, strerror(errno)); goto out; } if (strnlen(realp, PATH_MAX) == PATH_MAX) { - RTE_LOG(ERR, EAL, "Error, driver path greater than PATH_MAX\n"); + RTE_LOG_LINE(ERR, EAL, "Error, driver path greater than PATH_MAX"); goto out; } @@ -485,7 +485,7 @@ eal_dlopen(const char *pathname) retval = dlopen(realp, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); out: free(realp); return retval; @@ -500,7 +500,7 @@ is_shared_build(void) len = strlcpy(soname, EAL_SO"."ABI_VERSION, sizeof(soname)); if (len > sizeof(soname)) { - RTE_LOG(ERR, EAL, "Shared lib name too long in shared build check\n"); + RTE_LOG_LINE(ERR, EAL, "Shared lib name too long in shared build check"); len = sizeof(soname) - 1; } @@ -508,10 +508,10 @@ is_shared_build(void) void *handle; /* check if we have this .so loaded, if so - shared build */ - RTE_LOG(DEBUG, EAL, "Checking presence of .so '%s'\n", soname); + RTE_LOG_LINE(DEBUG, EAL, "Checking presence of .so '%s'", soname); handle = dlopen(soname, RTLD_LAZY | RTLD_NOLOAD); if (handle != NULL) { - RTE_LOG(INFO, EAL, "Detected shared linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected shared linkage of DPDK"); dlclose(handle); return 1; } @@ -524,7 +524,7 @@ is_shared_build(void) } } - RTE_LOG(INFO, 
EAL, "Detected static linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected static linkage of DPDK"); return 0; } @@ -549,13 +549,13 @@ eal_plugins_init(void) if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) { if (eal_plugindir_init(solib->name) == -1) { - RTE_LOG(ERR, EAL, - "Cannot init plugin directory %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot init plugin directory %s", solib->name); return -1; } } else { - RTE_LOG(DEBUG, EAL, "open shared lib %s\n", + RTE_LOG_LINE(DEBUG, EAL, "open shared lib %s", solib->name); solib->lib_handle = eal_dlopen(solib->name); if (solib->lib_handle == NULL) @@ -626,15 +626,15 @@ eal_parse_service_coremask(const char *coremask) uint32_t lcore = idx; if (main_lcore_parsed && cfg->main_lcore == lcore) { - RTE_LOG(ERR, EAL, - "lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "lcore %u is main lcore, cannot use as service core", idx); return -1; } if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, - "lcore %u unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, + "lcore %u unavailable", idx); return -1; } @@ -658,9 +658,9 @@ eal_parse_service_coremask(const char *coremask) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores are in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } cfg->service_lcore_count = count; @@ -689,7 +689,7 @@ update_lcore_config(int *cores) for (i = 0; i < RTE_MAX_LCORE; i++) { if (cores[i] != -1) { if (eal_cpu_detected(i) == 0) { - RTE_LOG(ERR, EAL, "lcore %u unavailable\n", i); + RTE_LOG_LINE(ERR, EAL, "lcore %u unavailable", i); ret = -1; continue; } @@ -717,7 +717,7 @@ check_core_list(int *lcores, unsigned int count) if (lcores[i] < RTE_MAX_LCORE) continue; - RTE_LOG(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)", lcores[i], RTE_MAX_LCORE); overflow = true; } @@ -737,9 +737,9 @@ check_core_list(int *lcores, unsigned int count) } if (len > 0) lcorestr[len - 1] = 0; - RTE_LOG(ERR, EAL, "To use high physical core ids, " + RTE_LOG_LINE(ERR, EAL, "To use high physical core ids, " "please use --lcores to map them to lcore ids below RTE_MAX_LCORE, " - "e.g. --lcores %s\n", lcorestr); + "e.g. --lcores %s", lcorestr); return -1; } @@ -769,7 +769,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) while ((i > 0) && isblank(coremask[i - 1])) i--; if (i == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -778,7 +778,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) c = coremask[i]; if (isxdigit(c) == 0) { /* invalid characters */ - RTE_LOG(ERR, EAL, "invalid characters in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "invalid characters in coremask: [%s]", coremask_orig); return -1; } @@ -787,7 +787,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) { if ((1 << j) & val) { if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. 
Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -796,7 +796,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) } } if (count == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -864,8 +864,8 @@ eal_parse_service_corelist(const char *corelist) uint32_t lcore = idx; if (cfg->main_lcore == lcore && main_lcore_parsed) { - RTE_LOG(ERR, EAL, - "Error: lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "Error: lcore %u is main lcore, cannot use as service core", idx); return -1; } @@ -887,9 +887,9 @@ eal_parse_service_corelist(const char *corelist) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores were in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } return 0; @@ -943,7 +943,7 @@ eal_parse_corelist(const char *corelist, int *cores) if (dup) continue; if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -991,8 +991,8 @@ eal_parse_main_lcore(const char *arg) /* ensure main core is not used as service core */ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) { - RTE_LOG(ERR, EAL, - "Error: Main lcore is used as a service core\n"); + RTE_LOG_LINE(ERR, EAL, + "Error: Main lcore is used as a service core"); return -1; } @@ -1132,8 +1132,8 @@ check_cpuset(rte_cpuset_t *set) continue; if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, "core %u " - "unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, "core %u " + "unavailable", idx); return -1; } } @@ -1612,8 +1612,8 @@ eal_parse_huge_unlink(const char *arg, struct hugepage_file_discipline *out) return 0; } if (strcmp(arg, HUGE_UNLINK_NEVER) == 0) { - RTE_LOG(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" - HUGE_UNLINK_NEVER" may create data leaks.\n"); + RTE_LOG_LINE(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" + HUGE_UNLINK_NEVER" may create data leaks."); out->unlink_existing = false; return 0; } @@ -1648,24 +1648,24 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S"); if (rte_eal_parse_coremask(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid coremask syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid coremask syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid coremask, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid coremask, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -c is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -c is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MAP) ? 
"--lcore" : "-c"); @@ -1680,25 +1680,25 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S"); if (eal_parse_corelist(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid core list syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid core list syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid core list, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid core list, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -l is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -l is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_MSK) ? "-c" : (core_parsed == LCORE_OPT_MAP) ? "--lcore" : "-l"); @@ -1711,14 +1711,14 @@ eal_parse_common_option(int opt, const char *optarg, /* service coremask */ case 's': if (eal_parse_service_coremask(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service coremask\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service coremask"); return -1; } break; /* service corelist */ case 'S': if (eal_parse_service_corelist(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service core list\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service core list"); return -1; } break; @@ -1733,7 +1733,7 @@ eal_parse_common_option(int opt, const char *optarg, case 'n': conf->force_nchannel = atoi(optarg); if (conf->force_nchannel == 0) { - RTE_LOG(ERR, EAL, "invalid channel number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid channel number"); return -1; } break; @@ -1742,7 +1742,7 @@ eal_parse_common_option(int opt, const char *optarg, conf->force_nrank = atoi(optarg); if (conf->force_nrank == 0 || conf->force_nrank > 16) { - RTE_LOG(ERR, EAL, "invalid rank number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid rank number"); return -1; } break; @@ -1756,13 +1756,13 @@ eal_parse_common_option(int opt, const char *optarg, * write message at highest log level so it can always * be seen * even if info or warning messages are disabled */ - RTE_LOG(CRIT, EAL, "RTE Version: '%s'\n", rte_version()); + RTE_LOG_LINE(CRIT, EAL, "RTE Version: '%s'", rte_version()); break; /* long options */ case OPT_HUGE_UNLINK_NUM: if (eal_parse_huge_unlink(optarg, &conf->hugepage_file) < 0) { - RTE_LOG(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option\n"); + RTE_LOG_LINE(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option"); return -1; } break; @@ -1802,8 +1802,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_MAIN_LCORE_NUM: if (eal_parse_main_lcore(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_MAIN_LCORE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_MAIN_LCORE); return -1; } break; @@ -1818,8 +1818,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_SYSLOG_NUM: if (eal_parse_syslog(optarg, conf) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SYSLOG "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SYSLOG); return -1; } break; @@ -1827,9 +1827,9 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LOG_LEVEL_NUM: { if (eal_parse_log_level(optarg) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, 
"invalid parameters for --" - OPT_LOG_LEVEL "\n"); + OPT_LOG_LEVEL); return -1; } break; @@ -1838,8 +1838,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_TRACE_NUM: { if (eal_trace_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE); return -1; } break; @@ -1847,8 +1847,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_DIR_NUM: { if (eal_trace_dir_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_DIR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_DIR); return -1; } break; @@ -1856,8 +1856,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_BUF_SIZE_NUM: { if (eal_trace_bufsz_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_BUF_SIZE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_BUF_SIZE); return -1; } break; @@ -1865,8 +1865,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_MODE_NUM: { if (eal_trace_mode_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_MODE); return -1; } break; @@ -1875,13 +1875,13 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LCORES_NUM: if (eal_parse_lcores(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_LCORES "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_LCORES); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option --lcore is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option --lcore is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MSK) ? 
"-c" : "--lcore"); @@ -1898,15 +1898,15 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_IOVA_MODE_NUM: if (eal_parse_iova_mode(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_IOVA_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_IOVA_MODE); return -1; } break; case OPT_BASE_VIRTADDR_NUM: if (eal_parse_base_virtaddr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_BASE_VIRTADDR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_BASE_VIRTADDR); return -1; } break; @@ -1917,8 +1917,8 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_FORCE_MAX_SIMD_BITWIDTH_NUM: if (eal_parse_simd_bitwidth(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_FORCE_MAX_SIMD_BITWIDTH "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_FORCE_MAX_SIMD_BITWIDTH); return -1; } break; @@ -1932,8 +1932,8 @@ eal_parse_common_option(int opt, const char *optarg, return 0; ba_conflict: - RTE_LOG(ERR, EAL, - "Options allow (-a) and block (-b) can't be used at the same time\n"); + RTE_LOG_LINE(ERR, EAL, + "Options allow (-a) and block (-b) can't be used at the same time"); return -1; } @@ -2034,94 +2034,94 @@ eal_check_common_options(struct internal_config *internal_cfg) eal_get_internal_configuration(); if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) { - RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n"); + RTE_LOG_LINE(ERR, EAL, "Main lcore is not enabled for DPDK"); return -1; } if (internal_cfg->process_type == RTE_PROC_INVALID) { - RTE_LOG(ERR, EAL, "Invalid process type specified\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid process type specified"); return -1; } if (internal_cfg->hugefile_prefix != NULL && strlen(internal_cfg->hugefile_prefix) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option"); return -1; } if (internal_cfg->hugepage_dir != NULL && strlen(internal_cfg->hugepage_dir) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option"); return -1; } if (internal_cfg->user_mbuf_pool_ops_name != NULL && strlen(internal_cfg->user_mbuf_pool_ops_name) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option"); return -1; } if (strchr(eal_get_hugefile_prefix(), '%') != NULL) { - RTE_LOG(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " - "option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " + "option"); return -1; } if (mem_parsed && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " - "be specified at the same time\n"); + RTE_LOG_LINE(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " + "be specified at the same time"); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option 
--"OPT_HUGE_UNLINK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->huge_worker_stack_size != 0) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_conf->force_socket_limits && internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_LIMIT - " is only supported in non-legacy memory mode\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_LIMIT + " is only supported in non-legacy memory mode"); } if (internal_cfg->single_file_segments && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " - "not compatible with --"OPT_HUGE_UNLINK"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " + "not compatible with --"OPT_HUGE_UNLINK); return -1; } if (!internal_cfg->hugepage_file.unlink_existing && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " - "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " + "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER); return -1; } if (internal_cfg->legacy_mem && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_IN_MEMORY"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_IN_MEMORY); return -1; } if (internal_cfg->legacy_mem && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->legacy_mem && internal_cfg->memory == 0) { - RTE_LOG(NOTICE, EAL, "Static memory layout is selected, " + RTE_LOG_LINE(NOTICE, EAL, "Static memory layout is selected, " "amount of reserved memory can be adjusted with " - "-m or --"OPT_SOCKET_MEM"\n"); + "-m or --"OPT_SOCKET_MEM); } return 0; @@ -2141,12 +2141,12 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); return -EPERM; } if (bitwidth < RTE_VECT_SIMD_DISABLED || !rte_is_power_of_2(bitwidth)) { - RTE_LOG(ERR, EAL, "Invalid bitwidth value!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid bitwidth value!"); return -EINVAL; } internal_conf->max_simd_bitwidth.bitwidth = bitwidth; diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c index 728815c4a9..abc6117c65 100644 --- a/lib/eal/common/eal_common_proc.c +++ b/lib/eal/common/eal_common_proc.c @@ -181,12 +181,12 @@ static int validate_action_name(const char *name) { if (name == NULL) { - RTE_LOG(ERR, EAL, "Action name cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Action 
name cannot be NULL"); rte_errno = EINVAL; return -1; } if (strnlen(name, RTE_MP_MAX_NAME_LEN) == 0) { - RTE_LOG(ERR, EAL, "Length of action name is zero\n"); + RTE_LOG_LINE(ERR, EAL, "Length of action name is zero"); rte_errno = EINVAL; return -1; } @@ -208,7 +208,7 @@ rte_mp_action_register(const char *name, rte_mp_t action) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } @@ -244,7 +244,7 @@ rte_mp_action_unregister(const char *name) return; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return; } @@ -291,12 +291,12 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) if (errno == EINTR) goto retry; - RTE_LOG(ERR, EAL, "recvmsg failed, %s\n", strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "recvmsg failed, %s", strerror(errno)); return -1; } if (msglen != buflen || (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) { - RTE_LOG(ERR, EAL, "truncated msg\n"); + RTE_LOG_LINE(ERR, EAL, "truncated msg"); return -1; } @@ -311,11 +311,11 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) } /* sanity-check the response */ if (m->msg.num_fds < 0 || m->msg.num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "invalid number of fd's received\n"); + RTE_LOG_LINE(ERR, EAL, "invalid number of fd's received"); return -1; } if (m->msg.len_param < 0 || m->msg.len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "invalid received data length\n"); + RTE_LOG_LINE(ERR, EAL, "invalid received data length"); return -1; } return msglen; @@ -340,7 +340,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "msg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "msg: %s", msg->name); if (m->type == MP_REP || m->type == MP_IGN) { struct pending_request *req = NULL; @@ -359,7 +359,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) req = async_reply_handle_thread_unsafe( pending_req); } else { - RTE_LOG(ERR, EAL, "Drop mp reply: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Drop mp reply: %s", msg->name); cleanup_msg_fds(msg); } pthread_mutex_unlock(&pending_requests.lock); @@ -388,12 +388,12 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) strlcpy(dummy.name, msg->name, sizeof(dummy.name)); mp_send(&dummy, s->sun_path, MP_IGN); } else { - RTE_LOG(ERR, EAL, "Cannot find action: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find action: %s", msg->name); } cleanup_msg_fds(msg); } else if (action(msg, s->sun_path) < 0) { - RTE_LOG(ERR, EAL, "Fail to handle message: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Fail to handle message: %s", msg->name); } } @@ -459,7 +459,7 @@ process_async_request(struct pending_request *sr, const struct timespec *now) tmp = realloc(user_msgs, sizeof(*msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", sr->dst, sr->request->name); /* this entry is going to be removed and its message * dropped, but we don't want to leak memory, so @@ -518,7 +518,7 @@ async_reply_handle_thread_unsafe(void *arg) struct timespec ts_now; if (clock_gettime(CLOCK_MONOTONIC, &ts_now) < 0) { - RTE_LOG(ERR, EAL, "Cannot get 
current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto no_trigger; } @@ -532,10 +532,10 @@ async_reply_handle_thread_unsafe(void *arg) * handling the same message twice. */ if (rte_errno == EINPROGRESS) { - RTE_LOG(DEBUG, EAL, "Request handling is already in progress\n"); + RTE_LOG_LINE(DEBUG, EAL, "Request handling is already in progress"); goto no_trigger; } - RTE_LOG(ERR, EAL, "Failed to cancel alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to cancel alarm"); } if (action == ACTION_TRIGGER) @@ -570,7 +570,7 @@ open_socket_fd(void) mp_fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (mp_fd < 0) { - RTE_LOG(ERR, EAL, "failed to create unix socket\n"); + RTE_LOG_LINE(ERR, EAL, "failed to create unix socket"); return -1; } @@ -582,13 +582,13 @@ open_socket_fd(void) unlink(un.sun_path); /* May still exist since last run */ if (bind(mp_fd, (struct sockaddr *)&un, sizeof(un)) < 0) { - RTE_LOG(ERR, EAL, "failed to bind %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to bind %s: %s", un.sun_path, strerror(errno)); close(mp_fd); return -1; } - RTE_LOG(INFO, EAL, "Multi-process socket %s\n", un.sun_path); + RTE_LOG_LINE(INFO, EAL, "Multi-process socket %s", un.sun_path); return mp_fd; } @@ -614,7 +614,7 @@ rte_mp_channel_init(void) * so no need to initialize IPC. */ if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled"); rte_errno = ENOTSUP; return -1; } @@ -630,13 +630,13 @@ rte_mp_channel_init(void) /* lock the directory */ dir_fd = open(mp_dir_path, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "failed to open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open %s: %s", mp_dir_path, strerror(errno)); return -1; } if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "failed to lock %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to lock %s: %s", mp_dir_path, strerror(errno)); close(dir_fd); return -1; @@ -649,7 +649,7 @@ rte_mp_channel_init(void) if (rte_thread_create_internal_control(&mp_handle_tid, "mp-msg", mp_handle, NULL) < 0) { - RTE_LOG(ERR, EAL, "failed to create mp thread: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to create mp thread: %s", strerror(errno)); close(dir_fd); close(rte_atomic_exchange_explicit(&mp_fd, -1, rte_memory_order_relaxed)); @@ -732,7 +732,7 @@ send_msg(const char *dst_path, struct rte_mp_msg *msg, int type) unlink(dst_path); return 0; } - RTE_LOG(ERR, EAL, "failed to send to (%s) due to %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to send to (%s) due to %s", dst_path, strerror(errno)); return -1; } @@ -760,7 +760,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) /* broadcast to all secondary processes */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; return -1; @@ -769,7 +769,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; closedir(mp_dir); @@ -799,7 +799,7 @@ static int check_input(const struct rte_mp_msg *msg) { if (msg == NULL) { - RTE_LOG(ERR, EAL, "Msg cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Msg cannot be NULL"); rte_errno = EINVAL; return -1; } @@ -808,25 +808,25 @@ check_input(const struct 
rte_mp_msg *msg) return -1; if (msg->len_param < 0) { - RTE_LOG(ERR, EAL, "Message data length is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Message data length is negative"); rte_errno = EINVAL; return -1; } if (msg->num_fds < 0) { - RTE_LOG(ERR, EAL, "Number of fd's is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Number of fd's is negative"); rte_errno = EINVAL; return -1; } if (msg->len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "Message data is too long\n"); + RTE_LOG_LINE(ERR, EAL, "Message data is too long"); rte_errno = E2BIG; return -1; } if (msg->num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "Cannot send more than %d FDs\n", + RTE_LOG_LINE(ERR, EAL, "Cannot send more than %d FDs", RTE_MP_MAX_FD_NUM); rte_errno = E2BIG; return -1; @@ -845,12 +845,12 @@ rte_mp_sendmsg(struct rte_mp_msg *msg) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } - RTE_LOG(DEBUG, EAL, "sendmsg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "sendmsg: %s", msg->name); return mp_send(msg, NULL, MP_MSG); } @@ -865,7 +865,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, pending_req = calloc(1, sizeof(*pending_req)); reply_msg = calloc(1, sizeof(*reply_msg)); if (pending_req == NULL || reply_msg == NULL) { - RTE_LOG(ERR, EAL, "Could not allocate space for sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Could not allocate space for sync request"); rte_errno = ENOMEM; ret = -1; goto fail; @@ -881,7 +881,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; ret = -1; goto fail; @@ -889,7 +889,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); ret = -1; goto fail; @@ -902,7 +902,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, /* if alarm set fails, we simply ignore the reply */ if (rte_eal_alarm_set(ts->tv_sec * 1000000 + ts->tv_nsec / 1000, async_reply_handle, pending_req) < 0) { - RTE_LOG(ERR, EAL, "Fail to set alarm for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to set alarm for request %s:%s", dst, req->name); ret = -1; goto fail; @@ -936,14 +936,14 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; return -1; } ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); return -1; } else if (ret == 0) @@ -961,13 +961,13 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, TAILQ_REMOVE(&pending_requests.requests, &pending_req, next); if (pending_req.reply_received == 0) { - RTE_LOG(ERR, EAL, "Fail to recv reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to recv reply for request %s:%s", dst, req->name); rte_errno = ETIMEDOUT; return -1; } if (pending_req.reply_received == -1) { - RTE_LOG(DEBUG, EAL, "Asked to ignore response\n"); + RTE_LOG_LINE(DEBUG, 
EAL, "Asked to ignore response"); /* not receiving this message is not an error, so decrement * number of sent messages */ @@ -977,7 +977,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, tmp = realloc(reply->msgs, sizeof(msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", dst, req->name); rte_errno = ENOMEM; return -1; @@ -999,7 +999,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); reply->nb_sent = 0; reply->nb_received = 0; @@ -1009,13 +1009,13 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, goto end; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; goto end; } @@ -1035,7 +1035,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, /* for primary process, broadcast request, and collect reply 1 by 1 */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto end; } @@ -1043,7 +1043,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto close_end; @@ -1102,19 +1102,19 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); if (check_input(req) != 0) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; return -1; } @@ -1122,7 +1122,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, dummy = calloc(1, sizeof(*dummy)); param = calloc(1, sizeof(*param)); if (copy == NULL || dummy == NULL || param == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate memory for async reply\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for async reply"); rte_errno = ENOMEM; goto fail; } @@ -1180,7 +1180,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* for primary process, broadcast request */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto unlock_fail; } @@ -1188,7 +1188,7 @@ 
rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto closedir_fail; @@ -1240,7 +1240,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, int rte_mp_reply(struct rte_mp_msg *msg, const char *peer) { - RTE_LOG(DEBUG, EAL, "reply: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "reply: %s", msg->name); const struct internal_config *internal_conf = eal_get_internal_configuration(); @@ -1248,13 +1248,13 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer) return -1; if (peer == NULL) { - RTE_LOG(ERR, EAL, "peer is not specified\n"); + RTE_LOG_LINE(ERR, EAL, "peer is not specified"); rte_errno = EINVAL; return -1; } if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return 0; } diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c index 580fbf24bc..06a6cac4ff 100644 --- a/lib/eal/common/eal_common_tailqs.c +++ b/lib/eal/common/eal_common_tailqs.c @@ -109,8 +109,8 @@ int rte_eal_tailq_register(struct rte_tailq_elem *t) { if (rte_eal_tailq_local_register(t) < 0) { - RTE_LOG(ERR, EAL, - "%s tailq is already registered\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "%s tailq is already registered", t->name); goto error; } @@ -119,8 +119,8 @@ rte_eal_tailq_register(struct rte_tailq_elem *t) if (rte_tailqs_count >= 0) { rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); TAILQ_REMOVE(&rte_tailq_elem_head, t, next); goto error; } @@ -145,8 +145,8 @@ rte_eal_tailqs_init(void) * rte_eal_tailq_register and EAL_REGISTER_TAILQ */ rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); /* TAILQ_REMOVE not needed, error is already fatal */ goto fail; } diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c index c422ea8b53..b0974a7aa5 100644 --- a/lib/eal/common/eal_common_thread.c +++ b/lib/eal/common/eal_common_thread.c @@ -86,7 +86,7 @@ int rte_thread_set_affinity(rte_cpuset_t *cpusetp) { if (rte_thread_set_affinity_by_id(rte_thread_self(), cpusetp) != 0) { - RTE_LOG(ERR, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_thread_set_affinity_by_id failed"); return -1; } @@ -175,7 +175,7 @@ eal_thread_loop(void *arg) __rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])", lcore_id, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -368,12 +368,12 @@ rte_thread_register(void) /* EAL init flushes all lcores, we can't register before. 
*/ if (eal_get_internal_configuration()->init_complete != 1) { - RTE_LOG(DEBUG, EAL, "Called %s before EAL init.\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "Called %s before EAL init.", __func__); rte_errno = EINVAL; return -1; } if (!rte_mp_disable()) { - RTE_LOG(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.\n"); + RTE_LOG_LINE(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported."); rte_errno = EINVAL; return -1; } @@ -387,7 +387,7 @@ rte_thread_register(void) rte_errno = ENOMEM; return -1; } - RTE_LOG(DEBUG, EAL, "Registered non-EAL thread as lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered non-EAL thread as lcore %u.", lcore_id); return 0; } @@ -401,7 +401,7 @@ rte_thread_unregister(void) eal_lcore_non_eal_release(lcore_id); __rte_thread_uninit(); if (lcore_id != LCORE_ID_ANY) - RTE_LOG(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).", lcore_id); } diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c index 5686a5102b..bd2ca85c6c 100644 --- a/lib/eal/common/eal_common_timer.c +++ b/lib/eal/common/eal_common_timer.c @@ -39,8 +39,8 @@ static uint64_t estimate_tsc_freq(void) { #define CYC_PER_10MHZ 1E7 - RTE_LOG(WARNING, EAL, "WARNING: TSC frequency estimated roughly" - " - clock timings may be less accurate.\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: TSC frequency estimated roughly" + " - clock timings may be less accurate."); /* assume that the rte_delay_us_sleep() will sleep for 1 second */ uint64_t start = rte_rdtsc(); rte_delay_us_sleep(US_PER_S); @@ -71,7 +71,7 @@ set_tsc_freq(void) if (!freq) freq = estimate_tsc_freq(); - RTE_LOG(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz\n", freq / 1000); + RTE_LOG_LINE(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz", freq / 1000); eal_tsc_resolution_hz = freq; mcfg->tsc_hz = freq; } diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c index 8561a0e198..f5e724f9cd 100644 --- a/lib/eal/common/eal_common_trace_utils.c +++ b/lib/eal/common/eal_common_trace_utils.c @@ -348,7 +348,7 @@ trace_mkdir(void) return -rte_errno; } - RTE_LOG(INFO, EAL, "Trace dir: %s\n", trace->dir); + RTE_LOG_LINE(INFO, EAL, "Trace dir: %s", trace->dir); already_done = true; return 0; } diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h index ace2ef3ee5..4dbd6ea457 100644 --- a/lib/eal/common/eal_trace.h +++ b/lib/eal/common/eal_trace.h @@ -17,10 +17,10 @@ #include "eal_thread.h" #define trace_err(fmt, args...) \ - RTE_LOG(ERR, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(ERR, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define trace_crit(fmt, args...) 
\ - RTE_LOG(CRIT, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(CRIT, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define TRACE_CTF_MAGIC 0xC1FC1FC1 #define TRACE_MAX_ARGS 32 diff --git a/lib/eal/common/hotplug_mp.c b/lib/eal/common/hotplug_mp.c index 602781966c..cd47c248f5 100644 --- a/lib/eal/common/hotplug_mp.c +++ b/lib/eal/common/hotplug_mp.c @@ -77,7 +77,7 @@ send_response_to_secondary(const struct eal_dev_mp_req *req, ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); return ret; } @@ -101,18 +101,18 @@ __handle_secondary_request(void *param) if (req->t == EAL_DEV_REQ_TYPE_ATTACH) { ret = local_dev_probe(req->devargs, &dev); if (ret != 0 && ret != -EEXIST) { - RTE_LOG(ERR, EAL, "Failed to hotplug add device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on primary"); goto finish; } ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } if (tmp_req.result != 0) { ret = tmp_req.result; - RTE_LOG(ERR, EAL, "Failed to hotplug add device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on secondary"); if (ret != -EEXIST) goto rollback; } @@ -123,27 +123,27 @@ __handle_secondary_request(void *param) ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } bus = rte_bus_find_by_name(da.bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da.bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da.bus->name); ret = -ENOENT; goto finish; } dev = bus->find_device(NULL, cmp_dev_name, da.name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da.name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da.name); ret = -ENOENT; goto finish; } if (tmp_req.result != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on secondary"); ret = tmp_req.result; if (ret != -ENOENT) goto rollback; @@ -151,12 +151,12 @@ __handle_secondary_request(void *param) ret = local_dev_remove(dev); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on primary"); if (ret != -ENOENT) goto rollback; } } else { - RTE_LOG(ERR, EAL, "unsupported secondary to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "unsupported secondary to primary request"); ret = -ENOTSUP; } goto finish; @@ -174,7 +174,7 @@ __handle_secondary_request(void *param) finish: ret = send_response_to_secondary(&tmp_req, ret, bundle->peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); rte_devargs_reset(&da); free(bundle->peer); @@ -191,7 +191,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle = malloc(sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -204,7 +204,7 @@ 
handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle->peer = strdup(peer); if (bundle->peer == NULL) { free(bundle); - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -214,7 +214,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) */ ret = rte_eal_alarm_set(1, __handle_secondary_request, bundle); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to add mp task\n"); + RTE_LOG_LINE(ERR, EAL, "failed to add mp task"); free(bundle->peer); free(bundle); return send_response_to_secondary(req, ret, peer); @@ -257,14 +257,14 @@ static void __handle_primary_request(void *param) bus = rte_bus_find_by_name(da->bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da->bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da->bus->name); ret = -ENOENT; goto quit; } dev = bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da->name); ret = -ENOENT; goto quit; } @@ -296,7 +296,7 @@ static void __handle_primary_request(void *param) memcpy(resp, req, sizeof(*resp)); resp->result = ret; if (rte_mp_reply(&mp_resp, bundle->peer) < 0) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); free(bundle->peer); free(bundle); @@ -320,11 +320,11 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) bundle = calloc(1, sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -336,12 +336,12 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) */ bundle->peer = (void *)strdup(peer); if (bundle->peer == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); free(bundle); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -356,7 +356,7 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) resp->result = ret; ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } } @@ -378,7 +378,7 @@ int eal_dev_hotplug_request_to_primary(struct eal_dev_mp_req *req) ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts); if (ret || mp_reply.nb_received != 1) { - RTE_LOG(ERR, EAL, "Cannot send request to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send request to primary"); if (!ret) return -1; return ret; @@ -408,14 +408,14 @@ int eal_dev_hotplug_request_to_secondary(struct eal_dev_mp_req *req) if (ret != 0) { /* if IPC is not supported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "rte_mp_request_sync failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_mp_request_sync failed"); else ret = 0; return ret; } if (mp_reply.nb_sent != mp_reply.nb_received) { - RTE_LOG(ERR, EAL, "not all secondary reply\n"); + 
RTE_LOG_LINE(ERR, EAL, "not all secondary reply"); free(mp_reply.msgs); return -1; } @@ -448,7 +448,7 @@ int eal_mp_dev_hotplug_init(void) handle_secondary_request); /* primary is allowed to not support IPC */ if (ret != 0 && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } @@ -456,7 +456,7 @@ int eal_mp_dev_hotplug_init(void) ret = rte_mp_action_register(EAL_DEV_MP_ACTION_REQUEST, handle_primary_request); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c index f5d1c8c2e2..6e9d5b8660 100644 --- a/lib/eal/common/malloc_elem.c +++ b/lib/eal/common/malloc_elem.c @@ -148,7 +148,7 @@ malloc_elem_insert(struct malloc_elem *elem) /* first and last elements must be both NULL or both non-NULL */ if ((heap->first == NULL) != (heap->last == NULL)) { - RTE_LOG(ERR, EAL, "Heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is probably corrupt"); return; } @@ -628,7 +628,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(hide_end); } else if (len_after > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } @@ -647,7 +647,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(prev); } else if (len_before > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c index 6b6cf9174c..010c84c36c 100644 --- a/lib/eal/common/malloc_heap.c +++ b/lib/eal/common/malloc_heap.c @@ -117,7 +117,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap_idx = malloc_socket_to_heap_id(msl->socket_id); if (heap_idx < 0) { - RTE_LOG(ERR, EAL, "Memseg list has invalid socket id\n"); + RTE_LOG_LINE(ERR, EAL, "Memseg list has invalid socket id"); return -1; } heap = &mcfg->malloc_heaps[heap_idx]; @@ -135,7 +135,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap->total_size += len; - RTE_LOG(DEBUG, EAL, "Added %zuM to heap on socket %i\n", len >> 20, + RTE_LOG_LINE(DEBUG, EAL, "Added %zuM to heap on socket %i", len >> 20, msl->socket_id); return 0; } @@ -308,7 +308,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* first, check if we're allowed to allocate this memory */ if (eal_memalloc_mem_alloc_validate(socket, heap->total_size + alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n"); + RTE_LOG_LINE(DEBUG, EAL, "User has disallowed allocation"); return NULL; } @@ -324,7 +324,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* check if we wanted contiguous memory but didn't get it */ if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space", __func__); goto fail; } @@ -352,8 +352,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, * which could solve some situations when IOVA VA is not * really needed. 
*/ - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask", __func__); /* @@ -363,8 +363,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, */ if ((rte_eal_iova_mode() == RTE_IOVA_VA) && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter", __func__); goto fail; } @@ -440,7 +440,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz, } heap->total_size += alloc_sz; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB", socket, alloc_sz >> 20ULL); free(ms); @@ -693,7 +693,7 @@ malloc_heap_alloc_on_heap_id(const char *type, size_t size, /* this should have succeeded */ if (ret == NULL) - RTE_LOG(ERR, EAL, "Error allocating from heap\n"); + RTE_LOG_LINE(ERR, EAL, "Error allocating from heap"); } alloc_unlock: rte_spinlock_unlock(&(heap->lock)); @@ -1040,7 +1040,7 @@ malloc_heap_free(struct malloc_elem *elem) /* we didn't exit early, meaning we have unmapped some pages */ unmapped = true; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB", msl->socket_id, aligned_len >> 20ULL); rte_mcfg_mem_write_unlock(); @@ -1199,7 +1199,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], } } if (msl == NULL) { - RTE_LOG(ERR, EAL, "Couldn't find empty memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find empty memseg list"); rte_errno = ENOSPC; return NULL; } @@ -1210,7 +1210,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], /* create the backing fbarray */ if (rte_fbarray_init(&msl->memseg_arr, fbarray_name, n_pages, sizeof(struct rte_memseg)) < 0) { - RTE_LOG(ERR, EAL, "Couldn't create fbarray backing the memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't create fbarray backing the memseg list"); return NULL; } arr = &msl->memseg_arr; @@ -1310,7 +1310,7 @@ malloc_heap_add_external_memory(struct malloc_heap *heap, heap->total_size += msl->len; /* all done! */ - RTE_LOG(DEBUG, EAL, "Added segment for heap %s starting at %p\n", + RTE_LOG_LINE(DEBUG, EAL, "Added segment for heap %s starting at %p", heap->name, msl->base_va); /* notify all subscribers that a new memory area has been added */ @@ -1356,7 +1356,7 @@ malloc_heap_create(struct malloc_heap *heap, const char *heap_name) /* prevent overflow. did you really create 2 billion heaps??? 
*/ if (next_socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; return -1; } @@ -1382,17 +1382,17 @@ int malloc_heap_destroy(struct malloc_heap *heap) { if (heap->alloc_count != 0) { - RTE_LOG(ERR, EAL, "Heap is still in use\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is still in use"); rte_errno = EBUSY; return -1; } if (heap->first != NULL || heap->last != NULL) { - RTE_LOG(ERR, EAL, "Heap still contains memory segments\n"); + RTE_LOG_LINE(ERR, EAL, "Heap still contains memory segments"); rte_errno = EBUSY; return -1; } if (heap->total_size != 0) - RTE_LOG(ERR, EAL, "Total size not zero, heap is likely corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Total size not zero, heap is likely corrupt"); /* Reset all of the heap but the (hold) lock so caller can release it. */ RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0); @@ -1411,7 +1411,7 @@ rte_eal_malloc_heap_init(void) eal_get_internal_configuration(); if (internal_conf->match_allocations) - RTE_LOG(DEBUG, EAL, "Hugepages will be freed exactly as allocated.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Hugepages will be freed exactly as allocated."); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* assign min socket ID to external heaps */ @@ -1431,7 +1431,7 @@ rte_eal_malloc_heap_init(void) } if (register_mp_requests()) { - RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't register malloc multiprocess actions"); return -1; } diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c index 4d62397aba..e0f49bc471 100644 --- a/lib/eal/common/malloc_mp.c +++ b/lib/eal/common/malloc_mp.c @@ -156,7 +156,7 @@ handle_sync(const struct rte_mp_msg *msg, const void *peer) int ret; if (req->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected request from primary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from primary"); return -1; } @@ -189,19 +189,19 @@ handle_free_request(const struct malloc_mp_req *m) /* check if the requested memory actually exists */ msl = rte_mem_virt2memseg_list(start); if (msl == NULL) { - RTE_LOG(ERR, EAL, "Requested to free unknown memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free unknown memory"); return -1; } /* check if end is within the same memory region */ if (rte_mem_virt2memseg_list(end) != msl) { - RTE_LOG(ERR, EAL, "Requested to free memory spanning multiple regions\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free memory spanning multiple regions"); return -1; } /* we're supposed to only free memory that's not external */ if (msl->external) { - RTE_LOG(ERR, EAL, "Requested to free external memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free external memory"); return -1; } @@ -228,13 +228,13 @@ handle_alloc_request(const struct malloc_mp_req *m, /* this is checked by the API, but we need to prevent divide by zero */ if (ar->page_sz == 0 || !rte_is_power_of_2(ar->page_sz)) { - RTE_LOG(ERR, EAL, "Attempting to allocate with invalid page size\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate with invalid page size"); return -1; } /* heap idx is index into the heap array, not socket ID */ if (ar->malloc_heap_idx >= RTE_MAX_HEAPS) { - RTE_LOG(ERR, EAL, "Attempting to allocate from invalid heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from invalid heap"); return -1; } @@ -247,7 +247,7 @@ handle_alloc_request(const struct malloc_mp_req *m, * socket ID's are always lower than RTE_MAX_NUMA_NODES. 
*/ if (heap->socket_id >= RTE_MAX_NUMA_NODES) { - RTE_LOG(ERR, EAL, "Attempting to allocate from external heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from external heap"); return -1; } @@ -258,7 +258,7 @@ handle_alloc_request(const struct malloc_mp_req *m, /* we can't know in advance how many pages we'll need, so we malloc */ ms = malloc(sizeof(*ms) * n_segs); if (ms == NULL) { - RTE_LOG(ERR, EAL, "Couldn't allocate memory for request state\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't allocate memory for request state"); return -1; } memset(ms, 0, sizeof(*ms) * n_segs); @@ -307,13 +307,13 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) /* make sure it's not a dupe */ entry = find_request_by_id(m->id); if (entry != NULL) { - RTE_LOG(ERR, EAL, "Duplicate request id\n"); + RTE_LOG_LINE(ERR, EAL, "Duplicate request id"); goto fail; } entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate memory for request"); goto fail; } @@ -325,7 +325,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) } else if (m->t == REQ_TYPE_FREE) { ret = handle_free_request(m); } else { - RTE_LOG(ERR, EAL, "Unexpected request from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from secondary"); goto fail; } @@ -345,7 +345,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) resp->id = m->id; if (rte_mp_sendmsg(&resp_msg)) { - RTE_LOG(ERR, EAL, "Couldn't send response\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send response"); goto fail; } /* we did not modify the request */ @@ -376,7 +376,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) handle_sync_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't send sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send sync request"); if (m->t == REQ_TYPE_ALLOC) free(entry->alloc_state.ms); goto fail; @@ -414,7 +414,7 @@ handle_sync_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } @@ -428,12 +428,12 @@ handle_sync_response(const struct rte_mp_msg *request, (struct malloc_mp_req *)reply->msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response to sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response to sync request"); result = REQ_RESULT_FAIL; break; } if (resp->id != entry->user_req.id) { - RTE_LOG(ERR, EAL, "Response to wrong sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Response to wrong sync request"); result = REQ_RESULT_FAIL; break; } @@ -458,7 +458,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry); @@ -482,7 +482,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry->alloc_state.ms); @@ -524,7 +524,7 @@ 
handle_sync_response(const struct rte_mp_msg *request, handle_rollback_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not send rollback request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send rollback request to secondary process"); /* we couldn't send rollback request, but that's OK - * secondary will time out, and memory has been removed @@ -536,7 +536,7 @@ handle_sync_response(const struct rte_mp_msg *request, goto fail; } } else { - RTE_LOG(ERR, EAL, " to sync request of unknown type\n"); + RTE_LOG_LINE(ERR, EAL, " to sync request of unknown type"); goto fail; } @@ -564,12 +564,12 @@ handle_rollback_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } if (entry->user_req.t != REQ_TYPE_ALLOC) { - RTE_LOG(ERR, EAL, "Unexpected active request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected active request"); goto fail; } @@ -582,7 +582,7 @@ handle_rollback_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); /* clean up */ TAILQ_REMOVE(&mp_request_list.list, entry, next); @@ -657,14 +657,14 @@ request_sync(void) if (ret != 0) { /* if IPC is unsupported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "Could not send sync request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send sync request to secondary process"); else ret = 0; goto out; } if (reply.nb_received != reply.nb_sent) { - RTE_LOG(ERR, EAL, "Not all secondaries have responded\n"); + RTE_LOG_LINE(ERR, EAL, "Not all secondaries have responded"); goto out; } @@ -672,15 +672,15 @@ request_sync(void) struct malloc_mp_req *resp = (struct malloc_mp_req *)reply.msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response from secondary"); goto out; } if (resp->id != req->id) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto out; } if (resp->result != REQ_RESULT_SUCCESS) { - RTE_LOG(ERR, EAL, "Secondary process failed to synchronize\n"); + RTE_LOG_LINE(ERR, EAL, "Secondary process failed to synchronize"); goto out; } } @@ -711,14 +711,14 @@ request_to_primary(struct malloc_mp_req *user_req) entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for request"); goto fail; } memset(entry, 0, sizeof(*entry)); if (gettimeofday(&now, NULL) < 0) { - RTE_LOG(ERR, EAL, "Cannot get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto fail; } @@ -740,7 +740,7 @@ request_to_primary(struct malloc_mp_req *user_req) memcpy(msg_req, user_req, sizeof(*msg_req)); if (rte_mp_sendmsg(&msg)) { - RTE_LOG(ERR, EAL, "Cannot send message to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send message to primary"); goto fail; } @@ -759,7 +759,7 @@ request_to_primary(struct malloc_mp_req *user_req) } while (ret != 0 && ret != ETIMEDOUT); if (entry->state != REQ_STATE_COMPLETE) { - RTE_LOG(ERR, EAL, "Request timed out\n"); + RTE_LOG_LINE(ERR, EAL, "Request timed out"); ret = -1; } else { ret = 0; @@ -783,24 +783,24 @@ 
register_mp_requests(void) /* it's OK for primary to not support IPC */ if (rte_mp_action_register(MP_ACTION_REQUEST, handle_request) && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_REQUEST); return -1; } } else { if (rte_mp_action_register(MP_ACTION_SYNC, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_ROLLBACK, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_RESPONSE, handle_response)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_RESPONSE); return -1; } diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c index e0494b2010..699022ae1c 100644 --- a/lib/eal/common/rte_keepalive.c +++ b/lib/eal/common/rte_keepalive.c @@ -53,7 +53,7 @@ struct rte_keepalive { static void print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core) { - RTE_LOG(INFO, EAL, "%sLast seen %" PRId64 "ms ago.\n", + RTE_LOG_LINE(INFO, EAL, "%sLast seen %" PRId64 "ms ago.", msg, ((rte_rdtsc() - keepcfg->last_alive[idx_core])*1000) / rte_get_tsc_hz() diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c index 9db0c399ae..9b3038805a 100644 --- a/lib/eal/common/rte_malloc.c +++ b/lib/eal/common/rte_malloc.c @@ -35,7 +35,7 @@ mem_free(void *addr, const bool trace_ena) if (addr == NULL) return; if (malloc_heap_free(malloc_elem_from_data(addr)) < 0) - RTE_LOG(ERR, EAL, "Error: Invalid memory\n"); + RTE_LOG_LINE(ERR, EAL, "Error: Invalid memory"); } void @@ -171,7 +171,7 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket) struct malloc_elem *elem = malloc_elem_from_data(ptr); if (elem == NULL) { - RTE_LOG(ERR, EAL, "Error: memory corruption detected\n"); + RTE_LOG_LINE(ERR, EAL, "Error: memory corruption detected"); return NULL; } @@ -598,7 +598,7 @@ rte_malloc_heap_create(const char *heap_name) /* existing heap */ if (strncmp(heap_name, tmp->name, RTE_HEAP_NAME_MAX_LEN) == 0) { - RTE_LOG(ERR, EAL, "Heap %s already exists\n", + RTE_LOG_LINE(ERR, EAL, "Heap %s already exists", heap_name); rte_errno = EEXIST; ret = -1; @@ -611,7 +611,7 @@ rte_malloc_heap_create(const char *heap_name) } } if (heap == NULL) { - RTE_LOG(ERR, EAL, "Cannot create new heap: no space\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new heap: no space"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -643,7 +643,7 @@ rte_malloc_heap_destroy(const char *heap_name) /* start from non-socket heaps */ heap = find_named_heap(heap_name); if (heap == NULL) { - RTE_LOG(ERR, EAL, "Heap %s not found\n", heap_name); + RTE_LOG_LINE(ERR, EAL, "Heap %s not found", heap_name); rte_errno = ENOENT; ret = -1; goto unlock; diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c index e183d2e631..3ed4186add 100644 --- a/lib/eal/common/rte_service.c +++ b/lib/eal/common/rte_service.c @@ -87,8 +87,8 @@ rte_service_init(void) RTE_BUILD_BUG_ON(RTE_SERVICE_NUM_MAX > 64); if (rte_service_library_initialized) { - RTE_LOG(NOTICE, EAL, - "service library init() called, init flag %d\n", + RTE_LOG_LINE(NOTICE, EAL, + "service library init() called, init flag %d", rte_service_library_initialized); return -EALREADY; } @@ -97,14 
+97,14 @@ rte_service_init(void) sizeof(struct rte_service_spec_impl), RTE_CACHE_LINE_SIZE); if (!rte_services) { - RTE_LOG(ERR, EAL, "error allocating rte services array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating rte services array"); goto fail_mem; } lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE, sizeof(struct core_state), RTE_CACHE_LINE_SIZE); if (!lcore_states) { - RTE_LOG(ERR, EAL, "error allocating core states array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating core states array"); goto fail_mem; } diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c index 568e06e9ed..2c5d196af0 100644 --- a/lib/eal/freebsd/eal.c +++ b/lib/eal/freebsd/eal.c @@ -117,7 +117,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -127,7 +127,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -136,8 +136,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. Is another primary " + "process running?", pathname); return -1; } @@ -145,7 +145,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -156,7 +156,7 @@ rte_eal_config_create(void) cfg_len_aligned, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0); if (mapped_mem_cfg_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; @@ -190,7 +190,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -202,7 +202,7 @@ rte_eal_config_attach(void) if (rte_mem_cfg_addr == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -242,14 +242,14 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! 
error %i (%s)", errno, strerror(errno)); return -1; } @@ -280,7 +280,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY"); return ptype; @@ -307,20 +307,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -454,7 +454,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -469,16 +469,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on FreeBSD", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on FreeBSD\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on FreeBSD", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on FreeBSD", opt); } eal_usage(prgname); ret = -1; @@ -489,11 +489,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -545,7 +545,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } @@ -572,7 +572,7 @@ rte_eal_iopl_init(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Launch threads, called at application init(). 
*/ @@ -629,7 +629,7 @@ rte_eal_init(int argc, char **argv) /* FreeBSD always uses legacy memory model */ internal_conf->legacy_mem = true; if (internal_conf->in_memory) { - RTE_LOG(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'\n", OPT_IN_MEMORY); + RTE_LOG_LINE(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'", OPT_IN_MEMORY); internal_conf->in_memory = false; } @@ -695,14 +695,14 @@ rte_eal_init(int argc, char **argv) has_phys_addr = internal_conf->no_hugetlbfs == 0; iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -725,7 +725,7 @@ rte_eal_init(int argc, char **argv) } rte_eal_get_configuration()->iova_mode = iova_mode; - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -751,11 +751,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -818,7 +818,7 @@ rte_eal_init(int argc, char **argv) ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? 
"" : "..."); @@ -917,7 +917,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c index e5b0909a45..2493adf8ae 100644 --- a/lib/eal/freebsd/eal_alarm.c +++ b/lib/eal/freebsd/eal_alarm.c @@ -59,7 +59,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c index c3dfe9108f..8d35148ba3 100644 --- a/lib/eal/freebsd/eal_dev.c +++ b/lib/eal/freebsd/eal_dev.c @@ -8,27 +8,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c index e58e618469..3c97daa444 100644 --- a/lib/eal/freebsd/eal_hugepage_info.c +++ b/lib/eal/freebsd/eal_hugepage_info.c @@ -72,7 +72,7 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers"); return -1; } @@ -81,28 +81,28 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size"); return -1; } fd = open(CONTIGMEM_DEV, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "could not open "CONTIGMEM_DEV"\n"); + RTE_LOG_LINE(ERR, EAL, "could not open "CONTIGMEM_DEV); return -1; } if (flock(fd, LOCK_EX | LOCK_NB) < 0) { - RTE_LOG(ERR, EAL, "could not lock memory. Is another DPDK process running?\n"); + RTE_LOG_LINE(ERR, EAL, "could not lock memory. 
Is another DPDK process running?"); return -1; } if (buffer_size >= 1<<30) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB", num_buffers, (int)(buffer_size>>30)); else if (buffer_size >= 1<<20) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB", num_buffers, (int)(buffer_size>>20)); else - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB", num_buffers, (int)(buffer_size>>10)); strlcpy(hpi->hugedir, CONTIGMEM_DEV, sizeof(hpi->hugedir)); @@ -117,7 +117,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL ) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); return -1; } @@ -132,7 +132,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } @@ -154,14 +154,14 @@ eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 2b31dfb099..ffba823808 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -90,12 +90,12 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, - "Registering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Registering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active: %d\n", kq); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active: %d", kq); return -ENODEV; } @@ -120,7 +120,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } @@ -132,13 +132,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(src); src = NULL; @@ -167,7 +167,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, ke.flags = EV_ADD; /* mark for addition to the queue 
*/ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert interrupt handle to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert interrupt handle to kevent"); ret = -ENODEV; goto fail; } @@ -181,10 +181,10 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, * user. so, don't output it unless debug log level set. */ if (errno == ENODEV) - RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n", + RTE_LOG_LINE(DEBUG, EAL, "Interrupt handle %d not supported", rte_intr_fd_get(src->intr_handle)); else - RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error adding fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); ret = -errno; @@ -222,13 +222,13 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -277,12 +277,12 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -312,7 +312,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, ke.flags = EV_DELETE; /* mark for deletion from the queue */ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); ret = -ENODEV; goto out; } @@ -321,7 +321,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected condition @@ -396,7 +396,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -437,7 +437,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -513,13 +513,13 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", event_fd, strerror(errno)); } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", event_fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", event_fd); else call = true; } @@ -556,7 +556,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) ke.flags = EV_DELETE; if (intr_source_to_kevent(src->intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); rte_spinlock_unlock(&intr_lock); return; } @@ -565,7 +565,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected @@ -606,8 +606,8 @@ eal_intr_thread_main(void *arg __rte_unused) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "kevent returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "kevent returns with fail"); break; } /* kevent timeout, will never happen here */ @@ -632,7 +632,7 @@ rte_eal_intr_init(void) kq = kqueue(); if (kq < 0) { - RTE_LOG(ERR, EAL, "Cannot create kqueue instance\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create kqueue instance"); return -1; } @@ -641,8 +641,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c index d9ef4bc9c5..cfd375076a 100644 --- a/lib/eal/freebsd/eal_lcore.c +++ b/lib/eal/freebsd/eal_lcore.c @@ -30,7 +30,7 @@ eal_get_ncpus(void) if (ncpu < 0) { sysctl(mib, 2, &ncpu, &len, NULL, 0); - RTE_LOG(INFO, EAL, "Sysctl reports %d cpus\n", ncpu); + RTE_LOG_LINE(INFO, EAL, "Sysctl reports %d cpus", ncpu); } return ncpu; } diff --git a/lib/eal/freebsd/eal_memalloc.c b/lib/eal/freebsd/eal_memalloc.c index 00ab02cb63..f96ed2ce21 100644 --- a/lib/eal/freebsd/eal_memalloc.c +++ b/lib/eal/freebsd/eal_memalloc.c @@ -15,21 +15,21 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms __rte_unused, int __rte_unused n_segs, size_t __rte_unused page_sz, int __rte_unused socket, bool __rte_unused exact) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } struct rte_memseg * eal_memalloc_alloc_seg(size_t __rte_unused page_sz, int __rte_unused socket) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return NULL; } int eal_memalloc_free_seg(struct rte_memseg *ms __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } @@ -37,14 +37,14 @@ int eal_memalloc_free_seg_bulk(struct rte_memseg **ms __rte_unused, int n_segs __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } int eal_memalloc_sync_with_primary(void) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c index 5c6165c580..195f570da0 100644 --- a/lib/eal/freebsd/eal_memory.c +++ b/lib/eal/freebsd/eal_memory.c @@ -84,7 +84,7 @@ rte_eal_hugepage_init(void) addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); return -1; } @@ -132,8 +132,8 @@ rte_eal_hugepage_init(void) error = sysctlbyname(physaddr_str, &physaddr, &sysctl_size, NULL, 0); if (error < 0) { - RTE_LOG(ERR, EAL, "Failed to get physical addr for buffer %u " - "from %s\n", 
j, hpi->hugedir); + RTE_LOG_LINE(ERR, EAL, "Failed to get physical addr for buffer %u " + "from %s", j, hpi->hugedir); return -1; } @@ -172,8 +172,8 @@ rte_eal_hugepage_init(void) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } arr = &msl->memseg_arr; @@ -190,7 +190,7 @@ rte_eal_hugepage_init(void) hpi->lock_descriptor, j * EAL_PAGE_SIZE); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", j, hpi->hugedir); return -1; } @@ -205,8 +205,8 @@ rte_eal_hugepage_init(void) rte_fbarray_set_used(arr, ms_idx); - RTE_LOG(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" - PRIx64", len %zu\n", + RTE_LOG_LINE(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" + PRIx64", len %zu", seg_idx++, addr, physaddr, page_sz); total_mem += seg->len; @@ -215,9 +215,9 @@ rte_eal_hugepage_init(void) break; } if (total_mem < internal_conf->memory) { - RTE_LOG(ERR, EAL, "Couldn't reserve requested memory, " + RTE_LOG_LINE(ERR, EAL, "Couldn't reserve requested memory, " "requested: %" PRIu64 "M " - "available: %" PRIu64 "M\n", + "available: %" PRIu64 "M", internal_conf->memory >> 20, total_mem >> 20); return -1; } @@ -268,7 +268,7 @@ rte_eal_hugepage_attach(void) /* Obtain a file descriptor for contiguous memory */ fd_hugepage = open(cur_hpi->hugedir, O_RDWR); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", cur_hpi->hugedir); goto error; } @@ -277,7 +277,7 @@ rte_eal_hugepage_attach(void) /* Map the contiguous memory into each memory segment */ if (rte_memseg_walk(attach_segment, &wa) < 0) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", wa.seg_idx, cur_hpi->hugedir); goto error; } @@ -402,8 +402,8 @@ memseg_primary_init(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -424,7 +424,7 @@ memseg_primary_init(void) type_msl_idx++; if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); return -1; } } @@ -449,13 +449,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c index 6f97a3c2c1..0f7284768a 100644 --- a/lib/eal/freebsd/eal_thread.c +++ b/lib/eal/freebsd/eal_thread.c @@ -38,7 +38,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); 
if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); pthread_set_name_np((pthread_t)thread_id.opaque_id, truncated); } diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c index beff755a47..61488ff641 100644 --- a/lib/eal/freebsd/eal_timer.c +++ b/lib/eal/freebsd/eal_timer.c @@ -36,20 +36,20 @@ get_tsc_freq(void) tmp = 0; if (sysctlbyname("kern.timecounter.smp_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not safe to use in SMP mode\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not safe to use in SMP mode"); tmp = 0; if (sysctlbyname("kern.timecounter.invariant_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not invariant\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not invariant"); sz = sizeof(tsc_hz); if (sysctlbyname("machdep.tsc_freq", &tsc_hz, &sz, NULL, 0)) { - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c index 57da058cec..8aaff34d54 100644 --- a/lib/eal/linux/eal.c +++ b/lib/eal/linux/eal.c @@ -94,7 +94,7 @@ eal_clean_runtime_dir(void) /* open directory */ dir = opendir(runtime_dir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open runtime directory %s", runtime_dir); goto error; } @@ -102,14 +102,14 @@ eal_clean_runtime_dir(void) /* lock the directory before doing anything, to avoid races */ if (flock(dir_fd, LOCK_EX) < 0) { - RTE_LOG(ERR, EAL, "Unable to lock runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock runtime directory %s", runtime_dir); goto error; } dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read runtime directory %s", runtime_dir); goto error; } @@ -159,7 +159,7 @@ eal_clean_runtime_dir(void) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while clearing runtime dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while clearing runtime dir: %s", strerror(errno)); return -1; @@ -200,7 +200,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -210,7 +210,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -219,8 +219,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. 
Is another primary " + "process running?", pathname); return -1; } @@ -228,7 +228,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -242,7 +242,7 @@ rte_eal_config_create(void) munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); return -1; } @@ -275,7 +275,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -287,7 +287,7 @@ rte_eal_config_attach(void) if (mem_config == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -328,13 +328,13 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", rte_mem_cfg_addr, mem_config); + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -365,7 +365,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -392,20 +392,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -474,7 +474,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg) len = strnlen(strval, SOCKET_MEM_STRLEN); if (len == SOCKET_MEM_STRLEN) { - RTE_LOG(ERR, EAL, "--socket-mem is too long\n"); + RTE_LOG_LINE(ERR, EAL, "--socket-mem is too long"); return -1; } @@ -595,13 +595,13 @@ eal_parse_huge_worker_stack(const char *arg) int ret; if (pthread_attr_init(&attr) != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size); pthread_attr_destroy(&attr); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } } else { @@ -617,7 +617,7 @@ eal_parse_huge_worker_stack(const char *arg) cfg->huge_worker_stack_size = stack_size * 1024; } - RTE_LOG(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack\n", + RTE_LOG_LINE(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack", cfg->huge_worker_stack_size / 1024); return 0; } @@ -673,7 +673,7 @@ eal_parse_args(int argc, char **argv) { char *hdir = strdup(optarg); if (hdir == NULL) - RTE_LOG(ERR, EAL, "Could not store hugepage directory\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store hugepage directory"); else { /* free old hugepage dir */ free(internal_conf->hugepage_dir); @@ -685,7 +685,7 @@ eal_parse_args(int argc, char **argv) { char *prefix = strdup(optarg); if (prefix == NULL) - RTE_LOG(ERR, EAL, "Could not store file prefix\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store file prefix"); else { /* free old prefix */ free(internal_conf->hugefile_prefix); @@ -696,8 +696,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_MEM_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_mem) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_MEM "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_MEM); eal_usage(prgname); ret = -1; goto out; @@ -708,8 +708,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_LIMIT_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_limit) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_LIMIT "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_LIMIT); eal_usage(prgname); ret = -1; goto out; @@ -719,8 +719,8 @@ eal_parse_args(int argc, char **argv) case OPT_VFIO_INTR_NUM: if (eal_parse_vfio_intr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_INTR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_INTR); eal_usage(prgname); ret = -1; goto out; @@ -729,8 +729,8 @@ eal_parse_args(int argc, char **argv) case 
OPT_VFIO_VF_TOKEN_NUM: if (eal_parse_vfio_vf_token(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_VF_TOKEN "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_VF_TOKEN); eal_usage(prgname); ret = -1; goto out; @@ -745,7 +745,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -761,8 +761,8 @@ eal_parse_args(int argc, char **argv) case OPT_HUGE_WORKER_STACK_NUM: if (eal_parse_huge_worker_stack(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_HUGE_WORKER_STACK"\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_HUGE_WORKER_STACK); eal_usage(prgname); ret = -1; goto out; @@ -771,16 +771,16 @@ eal_parse_args(int argc, char **argv) default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Linux", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Linux\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Linux", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Linux", opt); } eal_usage(prgname); ret = -1; @@ -791,11 +791,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -843,7 +843,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } static int @@ -880,7 +880,7 @@ static int rte_eal_vfio_setup(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* @@ -1073,27 +1073,27 @@ rte_eal_init(int argc, char **argv) enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Buses did not request a specific IOVA mode."); if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else if (!phys_addrs) { /* if we have no access to physical addresses, * pick IOVA as VA mode. 
*/ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode."); } else if (is_iommu_enabled()) { /* we have an IOMMU, pick IOVA as VA mode */ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode."); } else { /* physical addresses available, and no IOMMU * found, so pick IOVA as PA. */ iova_mode = RTE_IOVA_PA; - RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode."); } } rte_eal_get_configuration()->iova_mode = iova_mode; @@ -1114,7 +1114,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -1138,11 +1138,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -1229,7 +1229,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? "" : "..."); @@ -1350,7 +1350,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } @@ -1420,7 +1420,7 @@ rte_eal_check_module(const char *module_name) /* Check if there is sysfs mounted */ if (stat("/sys/module", &st) != 0) { - RTE_LOG(DEBUG, EAL, "sysfs is not mounted! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "sysfs is not mounted! error %i (%s)", errno, strerror(errno)); return -1; } @@ -1428,12 +1428,12 @@ rte_eal_check_module(const char *module_name) /* A module might be built-in, therefore try sysfs */ n = snprintf(sysfs_mod_name, PATH_MAX, "/sys/module/%s", module_name); if (n < 0 || n > PATH_MAX) { - RTE_LOG(DEBUG, EAL, "Could not format module path\n"); + RTE_LOG_LINE(DEBUG, EAL, "Could not format module path"); return -1; } if (stat(sysfs_mod_name, &st) != 0) { - RTE_LOG(DEBUG, EAL, "Module %s not found! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "Module %s not found! 
error %i (%s)", sysfs_mod_name, errno, strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c index 766ba2c251..3c0464ad10 100644 --- a/lib/eal/linux/eal_alarm.c +++ b/lib/eal/linux/eal_alarm.c @@ -65,7 +65,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c index ac76f6174d..16e817121d 100644 --- a/lib/eal/linux/eal_dev.c +++ b/lib/eal/linux/eal_dev.c @@ -64,7 +64,7 @@ static void sigbus_handler(int signum, siginfo_t *info, { int ret; - RTE_LOG(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p\n", + RTE_LOG_LINE(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p", info->si_addr); rte_spinlock_lock(&failure_handle_lock); @@ -88,7 +88,7 @@ static void sigbus_handler(int signum, siginfo_t *info, } } - RTE_LOG(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!\n"); + RTE_LOG_LINE(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!"); } static int cmp_dev_name(const struct rte_device *dev, @@ -108,7 +108,7 @@ dev_uev_socket_fd_create(void) fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK, NETLINK_KOBJECT_UEVENT); if (fd < 0) { - RTE_LOG(ERR, EAL, "create uevent fd failed.\n"); + RTE_LOG_LINE(ERR, EAL, "create uevent fd failed."); return -1; } @@ -119,7 +119,7 @@ dev_uev_socket_fd_create(void) ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr)); if (ret < 0) { - RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to bind uevent socket."); goto err; } @@ -245,18 +245,18 @@ dev_uev_handler(__rte_unused void *param) return; else if (ret <= 0) { /* connection is closed or broken, can not up again. 
*/ - RTE_LOG(ERR, EAL, "uevent socket connection is broken.\n"); + RTE_LOG_LINE(ERR, EAL, "uevent socket connection is broken."); rte_eal_alarm_set(1, dev_delayed_unregister, NULL); return; } ret = dev_uev_parse(buf, &uevent, EAL_UEV_MSG_LEN); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Ignoring uevent '%s'\n", buf); + RTE_LOG_LINE(DEBUG, EAL, "Ignoring uevent '%s'", buf); return; } - RTE_LOG(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)\n", + RTE_LOG_LINE(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)", uevent.devname, uevent.type, uevent.subsystem); switch (uevent.subsystem) { @@ -273,7 +273,7 @@ dev_uev_handler(__rte_unused void *param) rte_spinlock_lock(&failure_handle_lock); bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); goto failure_handle_err; } @@ -281,15 +281,15 @@ dev_uev_handler(__rte_unused void *param) dev = bus->find_device(NULL, cmp_dev_name, uevent.devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s) on " - "bus (%s)\n", uevent.devname, busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s) on " + "bus (%s)", uevent.devname, busname); goto failure_handle_err; } ret = bus->hot_unplug_handler(dev); if (ret) { - RTE_LOG(ERR, EAL, "Can not handle hot-unplug " - "for device (%s)\n", dev->name); + RTE_LOG_LINE(ERR, EAL, "Can not handle hot-unplug " + "for device (%s)", dev->name); } rte_spinlock_unlock(&failure_handle_lock); } @@ -318,7 +318,7 @@ rte_dev_event_monitor_start(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto exit; } @@ -332,7 +332,7 @@ rte_dev_event_monitor_start(void) ret = dev_uev_socket_fd_create(); if (ret) { - RTE_LOG(ERR, EAL, "error create device event fd.\n"); + RTE_LOG_LINE(ERR, EAL, "error create device event fd."); goto exit; } @@ -362,7 +362,7 @@ rte_dev_event_monitor_stop(void) rte_rwlock_write_lock(&monitor_lock); if (!monitor_refcount) { - RTE_LOG(ERR, EAL, "device event monitor already stopped\n"); + RTE_LOG_LINE(ERR, EAL, "device event monitor already stopped"); goto exit; } @@ -374,7 +374,7 @@ rte_dev_event_monitor_stop(void) ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler, (void *)-1); if (ret < 0) { - RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n"); + RTE_LOG_LINE(ERR, EAL, "fail to unregister uevent callback."); goto exit; } @@ -429,8 +429,8 @@ rte_dev_hotplug_handle_enable(void) ret = dev_sigbus_handler_register(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to register sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to register sigbus handler for devices."); hotplug_handle = true; @@ -444,8 +444,8 @@ rte_dev_hotplug_handle_disable(void) ret = dev_sigbus_handler_unregister(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to unregister sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to unregister sigbus handler for devices."); hotplug_handle = false; diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 36a495fb1f..971c57989d 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -110,7 +110,7 @@ get_num_hugepages(const char *subdir, size_t sz, unsigned int reusable_pages) over_pages = 0; if (num_pages == 0 && over_pages == 0 && reusable_pages) - RTE_LOG(WARNING, EAL, "No available %zu kB hugepages 
reported\n", + RTE_LOG_LINE(WARNING, EAL, "No available %zu kB hugepages reported", sz >> 10); num_pages += over_pages; @@ -155,7 +155,7 @@ get_num_hugepages_on_node(const char *subdir, unsigned int socket, size_t sz) return 0; if (num_pages == 0) - RTE_LOG(WARNING, EAL, "No free %zu kB hugepages reported on node %u\n", + RTE_LOG_LINE(WARNING, EAL, "No free %zu kB hugepages reported on node %u", sz >> 10, socket); /* @@ -239,7 +239,7 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len) if (rte_strsplit(buf, sizeof(buf), splitstr, _FIELDNAME_MAX, split_tok) != _FIELDNAME_MAX) { - RTE_LOG(ERR, EAL, "Error parsing %s\n", proc_mounts); + RTE_LOG_LINE(ERR, EAL, "Error parsing %s", proc_mounts); break; /* return NULL */ } @@ -325,7 +325,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dir = opendir(hugedir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open hugepage directory %s", hugedir); goto error; } @@ -333,7 +333,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read hugepage directory %s", hugedir); goto error; } @@ -377,7 +377,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while walking hugepage dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while walking hugepage dir: %s", strerror(errno)); return -1; @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; @@ -492,8 +492,8 @@ hugepage_info_init(void) dir = opendir(sys_dir_path); if (dir == NULL) { - RTE_LOG(ERR, EAL, - "Cannot open directory %s to read system hugepage info\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open directory %s to read system hugepage info", sys_dir_path); return -1; } @@ -520,10 +520,10 @@ hugepage_info_init(void) num_pages = get_num_hugepages(dirent->d_name, hpi->hugepage_sz, 0); if (num_pages > 0) - RTE_LOG(NOTICE, EAL, + RTE_LOG_LINE(NOTICE, EAL, "%" PRIu32 " hugepages of size " "%" PRIu64 " reserved, but no mounted " - "hugetlbfs found for that size\n", + "hugetlbfs found for that size", num_pages, hpi->hugepage_sz); /* if we have kernel support for reserving hugepages * through mmap, and we're in in-memory mode, treat this @@ -533,9 +533,9 @@ hugepage_info_init(void) */ #ifdef MAP_HUGE_SHIFT if (internal_conf->in_memory) { - RTE_LOG(DEBUG, EAL, "In-memory mode enabled, " + RTE_LOG_LINE(DEBUG, EAL, "In-memory mode enabled, " "hugepages of size %" PRIu64 " bytes " - "will be allocated anonymously\n", + "will be allocated anonymously", hpi->hugepage_sz); calc_num_pages(hpi, dirent, 0); num_sizes++; @@ -549,8 +549,8 @@ hugepage_info_init(void) /* if blocking lock failed */ if (flock(hpi->lock_descriptor, LOCK_EX) == -1) { - RTE_LOG(CRIT, EAL, - "Failed to lock hugepage directory!\n"); + RTE_LOG_LINE(CRIT, EAL, + "Failed to lock hugepage directory!"); break; } @@ -626,7 +626,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to 
create shared memory!"); return -1; } @@ -641,7 +641,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; @@ -657,14 +657,14 @@ int eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index eabac24992..9a7169c4e4 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -123,7 +123,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -140,7 +140,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -168,7 +168,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error masking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -184,7 +184,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -208,7 +208,7 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle) vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -238,7 +238,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -264,7 +264,7 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) { vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -303,7 +303,7 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, 
VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -331,7 +331,7 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -363,7 +363,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -392,7 +392,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -409,16 +409,16 @@ uio_intx_intr_disable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* disable interrupts */ command_high |= 0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error disabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error disabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -435,16 +435,16 @@ uio_intx_intr_enable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* enable interrupts */ command_high &= ~0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error enabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error enabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -459,7 +459,7 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -473,7 +473,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -492,14 +492,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, "Registering with invalid 
input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Registering with invalid input parameter"); return -EINVAL; } /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); return -ENOMEM; } callback->cb_fn = cb; @@ -526,14 +526,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; free(callback); callback = NULL; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(callback); callback = NULL; @@ -575,7 +575,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -625,7 +625,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -752,7 +752,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -817,7 +817,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) return -1; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -884,7 +884,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -972,8 +972,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", events[n].data.fd, strerror(errno)); /* @@ -995,8 +995,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) free(src); return -1; } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", events[n].data.fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", events[n].data.fd); else call = true; } @@ -1080,8 +1080,8 @@ eal_intr_handle_interrupts(int pfd, unsigned totalfds) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "epoll_wait returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "epoll_wait returns with fail"); return; } /* epoll_wait timeout, will never happens here */ @@ -1192,8 +1192,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for 
interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; @@ -1226,7 +1226,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) return; default: bytes_read = 1; - RTE_LOG(INFO, EAL, "unexpected intr type\n"); + RTE_LOG_LINE(INFO, EAL, "unexpected intr type"); break; } @@ -1242,11 +1242,11 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) if (errno == EINTR || errno == EWOULDBLOCK || errno == EAGAIN) continue; - RTE_LOG(ERR, EAL, - "Error reading from fd %d: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading from fd %d: %s", fd, strerror(errno)); } else if (nbytes == 0) - RTE_LOG(ERR, EAL, "Read nothing from fd %d\n", fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from fd %d", fd); return; } while (1); } @@ -1296,8 +1296,8 @@ eal_init_tls_epfd(void) int pfd = epoll_create(255); if (pfd < 0) { - RTE_LOG(ERR, EAL, - "Cannot create epoll instance\n"); + RTE_LOG_LINE(ERR, EAL, + "Cannot create epoll instance"); return -1; } return pfd; @@ -1320,7 +1320,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, int rc; if (!events) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1342,7 +1342,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, continue; } /* epoll_wait fail */ - RTE_LOG(ERR, EAL, "epoll_wait returns with fail %s\n", + RTE_LOG_LINE(ERR, EAL, "epoll_wait returns with fail %s", strerror(errno)); rc = -1; break; @@ -1393,7 +1393,7 @@ rte_epoll_ctl(int epfd, int op, int fd, struct epoll_event ev; if (!event) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1411,7 +1411,7 @@ rte_epoll_ctl(int epfd, int op, int fd, ev.events = event->epdata.event; if (epoll_ctl(epfd, op, fd, &ev) < 0) { - RTE_LOG(ERR, EAL, "Error op %d fd %d epoll_ctl, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error op %d fd %d epoll_ctl, %s", op, fd, strerror(errno)); if (op == EPOLL_CTL_ADD) /* rollback status when CTL_ADD fail */ @@ -1442,7 +1442,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 || efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) { - RTE_LOG(ERR, EAL, "Wrong intr vector number.\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong intr vector number."); return -EPERM; } @@ -1452,7 +1452,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) != RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event already been added.\n"); + RTE_LOG_LINE(INFO, EAL, "Event already been added."); return -EEXIST; } @@ -1465,9 +1465,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = rte_epoll_ctl(epfd, epfd_op, rte_intr_efds_index_get(intr_handle, efd_idx), rev); if (!rc) - RTE_LOG(DEBUG, EAL, - "efd %d associated with vec %d added on epfd %d" - "\n", rev->fd, vec, epfd); + RTE_LOG_LINE(DEBUG, EAL, + "efd %d associated with vec %d added on epfd %d", + rev->fd, vec, epfd); else rc = -EPERM; break; @@ -1476,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) == RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event does not exist.\n"); + RTE_LOG_LINE(INFO, EAL, "Event does not 
exist."); return -EPERM; } @@ -1485,7 +1485,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = -EPERM; break; default: - RTE_LOG(ERR, EAL, "event op type mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "event op type mismatch"); rc = -EPERM; } @@ -1523,8 +1523,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) for (i = 0; i < n; i++) { fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (fd < 0) { - RTE_LOG(ERR, EAL, - "can't setup eventfd, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, + "can't setup eventfd, error %i (%s)", errno, strerror(errno)); return -errno; } @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); + RTE_LOG_LINE(ERR, EAL, "the efd_counter_size is oversized"); return -EINVAL; } } else { diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c index 2e6a350603..42bf0ee7a1 100644 --- a/lib/eal/linux/eal_lcore.c +++ b/lib/eal/linux/eal_lcore.c @@ -68,7 +68,7 @@ eal_cpu_core_id(unsigned lcore_id) return (unsigned)id; err: - RTE_LOG(ERR, EAL, "Error reading core id value from %s " - "for lcore %u - assuming core 0\n", SYS_CPU_DIR, lcore_id); + RTE_LOG_LINE(ERR, EAL, "Error reading core id value from %s " + "for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id); return 0; } diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c index 9853ec78a2..35a1868e32 100644 --- a/lib/eal/linux/eal_memalloc.c +++ b/lib/eal/linux/eal_memalloc.c @@ -147,7 +147,7 @@ check_numa(void) bool ret = true; /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); ret = false; } return ret; @@ -156,16 +156,16 @@ check_numa(void) static void prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); if (get_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. 
" - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); *oldpolicy = MPOL_DEFAULT; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", socket_id); numa_set_preferred(socket_id); } @@ -173,13 +173,13 @@ prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) static void restore_numa(int *oldpolicy, struct bitmask *oldmask) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", *oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", *oldpolicy); if (*oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(*oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -223,7 +223,7 @@ static int lock(int fd, int type) /* couldn't lock */ return 0; } else if (ret) { - RTE_LOG(ERR, EAL, "%s(): error calling flock(): %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): error calling flock(): %s", __func__, strerror(errno)); return -1; } @@ -251,7 +251,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, snprintf(segname, sizeof(segname), "seg_%i", list_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -265,7 +265,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, list_idx, seg_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -316,7 +316,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, */ ret = stat(path, &st); if (ret < 0 && errno != ENOENT) { - RTE_LOG(DEBUG, EAL, "%s(): stat() for '%s' failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat() for '%s' failed: %s", __func__, path, strerror(errno)); return -1; } @@ -342,7 +342,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, ret == 0) { /* coverity[toctou] */ if (unlink(path) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): could not remove '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): could not remove '%s': %s", __func__, path, strerror(errno)); return -1; } @@ -351,13 +351,13 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, /* coverity[toctou] */ fd = open(path, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(ERR, EAL, "%s(): open '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): open '%s' failed: %s", __func__, path, strerror(errno)); return -1; } /* take out a read lock */ if (lock(fd, LOCK_SH) < 0) { - RTE_LOG(ERR, EAL, "%s(): lock '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): lock '%s' failed: %s", __func__, path, strerror(errno)); close(fd); return -1; @@ -378,7 +378,7 @@ resize_hugefile_in_memory(int fd, uint64_t fa_offset, ret = fallocate(fd, flags, fa_offset, page_sz); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -402,7 +402,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, */ if (!grow) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate not supported, not freeing 
page back to the system", __func__); return -1; } @@ -414,7 +414,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, *dirty = new_size <= cur_size; if (new_size > cur_size && ftruncate(fd, new_size) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); return -1; } @@ -444,12 +444,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, if (ret < 0) { if (fallocate_supported == -1 && errno == ENOTSUP) { - RTE_LOG(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled\n", + RTE_LOG_LINE(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled", __func__); again = true; fallocate_supported = 0; } else { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -483,7 +483,7 @@ close_hugefile(int fd, char *path, int list_idx) if (!internal_conf->in_memory && rte_eal_process_type() == RTE_PROC_PRIMARY && unlink(path)) - RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): unlinking '%s' failed: %s", __func__, path, strerror(errno)); close(fd); @@ -536,12 +536,12 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, /* these are checked at init, but code analyzers don't know that */ if (internal_conf->in_memory && !anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory"); return -1; } if (internal_conf->in_memory && !memfd_create_supported && internal_conf->single_file_segments) { - RTE_LOG(ERR, EAL, "Single-file segments are not supported without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments are not supported without memfd support"); return -1; } @@ -569,7 +569,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx, &dirty); if (fd < 0) { - RTE_LOG(ERR, EAL, "Couldn't get fd on hugepage file\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't get fd on hugepage file"); return -1; } @@ -584,14 +584,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, } else { map_offset = 0; if (ftruncate(fd, alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); goto resized; } if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->in_memory) { if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "%s(): unlink() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): unlink() failed: %s", __func__, strerror(errno)); goto resized; } @@ -610,7 +610,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, map_offset); if (va == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap() failed: %s", __func__, strerror(errno)); /* mmap failed, but the previous region might have been * unmapped anyway. 
try to remap it @@ -618,7 +618,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, goto unmapped; } if (va != addr) { - RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "%s(): wrong mmap() address", __func__); munmap(va, alloc_sz); goto resized; } @@ -631,7 +631,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB", (unsigned int)(alloc_sz >> 20)); goto mapped; } @@ -645,7 +645,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_PHYS_ADDR) { - RTE_LOG(DEBUG, EAL, "%s(): can't get IOVA addr\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): can't get IOVA addr", __func__); goto mapped; } @@ -661,19 +661,19 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, ret = get_mempolicy(&cur_socket_id, NULL, 0, addr, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): get_mempolicy: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): get_mempolicy: %s", __func__, strerror(errno)); goto mapped; } else if (cur_socket_id != socket_id) { - RTE_LOG(DEBUG, EAL, - "%s(): allocation happened on wrong socket (wanted %d, got %d)\n", + RTE_LOG_LINE(DEBUG, EAL, + "%s(): allocation happened on wrong socket (wanted %d, got %d)", __func__, socket_id, cur_socket_id); goto mapped; } } #else if (rte_socket_count() > 1) - RTE_LOG(DEBUG, EAL, "%s(): not checking hugepage NUMA node.\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): not checking hugepage NUMA node.", __func__); #endif @@ -703,7 +703,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * somebody else maps this hole now, we could accidentally * override it in the future. */ - RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n"); + RTE_LOG_LINE(CRIT, EAL, "Can't mmap holes in our virtual address space"); } /* roll back the ref count */ if (internal_conf->single_file_segments) @@ -748,7 +748,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi, if (mmap(ms->addr, ms->len, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "couldn't unmap page\n"); + RTE_LOG_LINE(DEBUG, EAL, "couldn't unmap page"); return -1; } @@ -873,13 +873,13 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -896,7 +896,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (alloc_seg(cur, map_addr, wa->socket, wa->hi, msl_idx, cur_idx)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated\n", + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ @@ -916,7 +916,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) * may fail. 
*/ if (free_seg(tmp, wa->hi, msl_idx, j)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -980,13 +980,13 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1037,7 +1037,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, } } if (!hi) { - RTE_LOG(ERR, EAL, "%s(): can't find relevant hugepage_info entry\n", + RTE_LOG_LINE(ERR, EAL, "%s(): can't find relevant hugepage_info entry", __func__); return -1; } @@ -1061,7 +1061,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n", + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't find suitable memseg_list", __func__); ret = -1; } else if (ret > 0) { @@ -1104,7 +1104,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -1118,7 +1118,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == (int)RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -1133,7 +1133,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; @@ -1344,13 +1344,13 @@ sync_existing(struct rte_memseg_list *primary_msl, */ dir_fd = open(hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1405,7 +1405,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused) } } if (!hi) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); return -1; } @@ -1454,7 +1454,7 @@ secondary_msl_create_walk(const struct rte_memseg_list *msl, primary_msl->memseg_arr.len, primary_msl->memseg_arr.elt_sz); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot initialize local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot initialize local memory map"); return -1; } local_msl->base_va = primary_msl->base_va; @@ -1479,7 +1479,7 @@ 
secondary_msl_destroy_walk(const struct rte_memseg_list *msl, ret = rte_fbarray_destroy(&local_msl->memseg_arr); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot destroy local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy local memory map"); return -1; } local_msl->base_va = NULL; @@ -1501,7 +1501,7 @@ alloc_list(int list_idx, int len) /* ensure we have space to store fd per each possible segment */ data = malloc(sizeof(int) * len); if (data == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate space for file descriptors"); return -1; } /* set all fd's as invalid */ @@ -1750,13 +1750,13 @@ eal_memalloc_init(void) int mfd_res = test_memfd_create(); if (mfd_res < 0) { - RTE_LOG(ERR, EAL, "Unable to check if memfd is supported\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to check if memfd is supported"); return -1; } if (mfd_res == 1) - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); else - RTE_LOG(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages\n"); + RTE_LOG_LINE(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages"); /* we only support single-file segments mode with in-memory mode * if we support hugetlbfs with memfd_create. this code will @@ -1764,18 +1764,18 @@ eal_memalloc_init(void) */ if (internal_conf->single_file_segments && mfd_res != 1) { - RTE_LOG(ERR, EAL, "Single-file segments mode cannot be used without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments mode cannot be used without memfd support"); return -1; } /* this cannot ever happen but better safe than sorry */ if (!anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Using anonymous memory is not supported\n"); + RTE_LOG_LINE(ERR, EAL, "Using anonymous memory is not supported"); return -1; } /* safety net, should be impossible to configure */ if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->hugepage_file.unlink_existing) { - RTE_LOG(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping."); return -1; } } diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c index 9b6f08fba8..2f2551588b 100644 --- a/lib/eal/linux/eal_memory.c +++ b/lib/eal/linux/eal_memory.c @@ -104,7 +104,7 @@ rte_mem_virt2phy(const void *virtaddr) fd = open("/proc/self/pagemap", O_RDONLY); if (fd < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } @@ -112,7 +112,7 @@ rte_mem_virt2phy(const void *virtaddr) virt_pfn = (unsigned long)virtaddr / page_size; offset = sizeof(uint64_t) * virt_pfn; if (lseek(fd, offset, SEEK_SET) == (off_t) -1) { - RTE_LOG(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s", __func__, strerror(errno)); close(fd); return RTE_BAD_IOVA; @@ -121,12 +121,12 @@ rte_mem_virt2phy(const void *virtaddr) retval = read(fd, &page, PFN_MASK_SIZE); close(fd); if (retval < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } else if (retval != PFN_MASK_SIZE) { - RTE_LOG(INFO, EAL, 
"%s(): read %d bytes from /proc/self/pagemap " - "but expected %d:\n", + RTE_LOG_LINE(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap " + "but expected %d:", __func__, retval, PFN_MASK_SIZE); return RTE_BAD_IOVA; } @@ -237,7 +237,7 @@ static int huge_wrap_sigsetjmp(void) /* Callback for numa library. */ void numa_error(char *where) { - RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s failed: %s", where, strerror(errno)); } #endif @@ -267,18 +267,18 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); have_numa = false; } if (have_numa) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); oldmask = numa_allocate_nodemask(); if (get_mempolicy(&oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. " - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); oldpolicy = MPOL_DEFAULT; } for (i = 0; i < RTE_MAX_NUMA_NODES; i++) @@ -316,8 +316,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, essential_memory[j] -= hugepage_sz; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", node_id); numa_set_preferred(node_id); } @@ -332,7 +332,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* try to create hugepage file */ fd = open(hf->filepath, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): open failed: %s", __func__, strerror(errno)); goto out; } @@ -345,7 +345,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, virtaddr = mmap(NULL, hugepage_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0); if (virtaddr == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap failed: %s", __func__, strerror(errno)); close(fd); goto out; @@ -361,8 +361,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more " - "hugepages of size %u MB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more " + "hugepages of size %u MB", (unsigned int)(hugepage_sz / 0x100000)); munmap(virtaddr, hugepage_sz); close(fd); @@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* set shared lock on the file. 
*/ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed:%s ", __func__, strerror(errno)); close(fd); goto out; @@ -390,13 +390,13 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, out: #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES if (maxnode) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", oldpolicy); if (oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -424,8 +424,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) f = fopen("/proc/self/numa_maps", "r"); if (f == NULL) { - RTE_LOG(NOTICE, EAL, "NUMA support not available" - " consider that all memory is in socket_id 0\n"); + RTE_LOG_LINE(NOTICE, EAL, "NUMA support not available" + " consider that all memory is in socket_id 0"); return 0; } @@ -443,20 +443,20 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) /* get zone addr */ virt_addr = strtoull(buf, &end, 16); if (virt_addr == 0 || end == buf) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } /* get node id (socket id) */ nodestr = strstr(buf, " N"); if (nodestr == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } nodestr += 2; end = strstr(nodestr, "="); if (end == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } end[0] = '\0'; @@ -464,7 +464,7 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) socket_id = strtoul(nodestr, &end, 0); if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } @@ -475,8 +475,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) hugepg_tbl[i].socket_id = socket_id; hp_count++; #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES - RTE_LOG(DEBUG, EAL, - "Hugepage %s is on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Hugepage %s is on socket %d", hugepg_tbl[i].filepath, socket_id); #endif } @@ -589,7 +589,7 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl, struct hugepage_file *hp = &hugepg_tbl[page]; if (hp->orig_va != NULL && unlink(hp->filepath)) { - RTE_LOG(WARNING, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(WARNING, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); } } @@ -639,7 +639,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl, hp->orig_va = NULL; if (unlink(hp->filepath) == -1) { - RTE_LOG(ERR, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); return -1; } @@ -676,7 +676,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) socket_id = hugepages[seg_start].socket_id; seg_len = seg_end - seg_start; - RTE_LOG(DEBUG, EAL, "Attempting to map %" 
PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20ULL, socket_id); /* find free space in memseg lists */ @@ -716,8 +716,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } @@ -735,13 +735,13 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) fd = open(hfile->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open '%s': %s", hfile->filepath, strerror(errno)); return -1; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "Could not lock '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Could not lock '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -755,7 +755,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) addr = mmap(addr, page_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Couldn't remap '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't remap '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -790,10 +790,10 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } - RTE_LOG(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20, socket_id); return seg_len; } @@ -819,7 +819,7 @@ static int memseg_list_free(struct rte_memseg_list *msl) { if (rte_fbarray_destroy(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy memseg list"); return -1; } memset(msl, 0, sizeof(*msl)); @@ -965,7 +965,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -976,7 +976,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) /* finally, allocate VA space */ if (eal_memseg_list_alloc(msl, 0) < 0) { - RTE_LOG(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages\n", + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages", page_sz >> 10); return -1; } @@ -1177,15 +1177,15 @@ eal_legacy_hugepage_init(void) /* create a memfd and store it in the segment fd table */ memfd = memfd_create("nohuge", 0); if (memfd < 0) { - RTE_LOG(DEBUG, EAL, "Cannot create memfd: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot create memfd: %s", strerror(errno)); - RTE_LOG(DEBUG, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(DEBUG, EAL, "Falling back to anonymous map"); } else { /* we got an fd - now resize it */ if (ftruncate(memfd, internal_conf->memory) < 0) { 
- RTE_LOG(ERR, EAL, "Cannot resize memfd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize memfd: %s", strerror(errno)); - RTE_LOG(ERR, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(ERR, EAL, "Falling back to anonymous map"); close(memfd); } else { /* creating memfd-backed file was successful. @@ -1193,7 +1193,7 @@ eal_legacy_hugepage_init(void) * other processes (such as vhost backend), so * map it as shared memory. */ - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); fd = memfd; flags = MAP_SHARED; } @@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void) * fit into the DMA mask. */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } @@ -1211,7 +1211,7 @@ eal_legacy_hugepage_init(void) addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0); if (addr == MAP_FAILED || addr != prealloc_addr) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); munmap(prealloc_addr, mem_sz); return -1; @@ -1222,7 +1222,7 @@ eal_legacy_hugepage_init(void) */ if (fd != -1) { if (eal_memalloc_set_seg_list_fd(0, fd) < 0) { - RTE_LOG(ERR, EAL, "Cannot set up segment list fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot set up segment list fd"); /* not a serious error, proceed */ } } @@ -1231,13 +1231,13 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); if (rte_eal_iova_mode() == RTE_IOVA_VA && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter.", __func__); goto fail; } @@ -1292,8 +1292,8 @@ eal_legacy_hugepage_init(void) pages_old = hpi->num_pages[0]; pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, memory); if (pages_new < pages_old) { - RTE_LOG(DEBUG, EAL, - "%d not %d hugepages of size %u MB allocated\n", + RTE_LOG_LINE(DEBUG, EAL, + "%d not %d hugepages of size %u MB allocated", pages_new, pages_old, (unsigned)(hpi->hugepage_sz / 0x100000)); @@ -1309,23 +1309,23 @@ eal_legacy_hugepage_init(void) rte_eal_iova_mode() != RTE_IOVA_VA) { /* find physical addresses for each hugepage */ if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to find phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } else { /* set physical addresses for each hugepage */ if (set_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to set phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to set phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){ - RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages", (unsigned)(hpi->hugepage_sz / 0x100000)); goto fail; } @@ -1382,9 +1382,9 @@ 
eal_legacy_hugepage_init(void) for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) { for (j = 0; j < RTE_MAX_NUMA_NODES; j++) { if (used_hp[i].num_pages[j] > 0) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Requesting %u pages of size %uMB" - " from socket %i\n", + " from socket %i", used_hp[i].num_pages[j], (unsigned) (used_hp[i].hugepage_sz / 0x100000), @@ -1398,7 +1398,7 @@ eal_legacy_hugepage_init(void) nr_hugefiles * sizeof(struct hugepage_file)); if (hugepage == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); goto fail; } memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file)); @@ -1409,7 +1409,7 @@ eal_legacy_hugepage_init(void) */ if (unmap_unneeded_hugepages(tmp_hp, used_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unmapping and locking hugepages failed!"); goto fail; } @@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void) */ if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles, tmp_hp, nr_hugefiles) < 0) { - RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Copying tables to shared memory failed!"); goto fail; } @@ -1428,7 +1428,7 @@ eal_legacy_hugepage_init(void) /* for legacy 32-bit mode, we did not preallocate VA space, so do it */ if (internal_conf->legacy_mem && prealloc_segments(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Could not preallocate VA space for hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Could not preallocate VA space for hugepages"); goto fail; } #endif @@ -1437,14 +1437,14 @@ eal_legacy_hugepage_init(void) * pages become first-class citizens in DPDK memory subsystem */ if (remap_needed_hugepages(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Couldn't remap hugepage files into memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't remap hugepage files into memseg lists"); goto fail; } /* free the hugepage backing files */ if (internal_conf->hugepage_file.unlink_before_mapping && unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking hugepage files failed!"); goto fail; } @@ -1480,8 +1480,8 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); goto fail; } @@ -1527,15 +1527,15 @@ eal_legacy_hugepage_attach(void) int fd, fd_hugepage = -1; if (aslr_enabled() > 0) { - RTE_LOG(WARNING, EAL, "WARNING: Address Space Layout Randomization " - "(ASLR) is enabled in the kernel.\n"); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory " - "into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Address Space Layout Randomization " + "(ASLR) is enabled in the kernel."); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory " + "into secondary processes"); } fd_hugepage = open(eal_hugepage_data_path(), O_RDONLY); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", eal_hugepage_data_path()); goto error; } @@ -1543,13 +1543,13 @@ eal_legacy_hugepage_attach(void) size = getFileSize(fd_hugepage); hp = mmap(NULL, size, PROT_READ, 
MAP_PRIVATE, fd_hugepage, 0); if (hp == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not mmap %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not mmap %s", eal_hugepage_data_path()); goto error; } num_hp = size / sizeof(struct hugepage_file); - RTE_LOG(DEBUG, EAL, "Analysing %u files\n", num_hp); + RTE_LOG_LINE(DEBUG, EAL, "Analysing %u files", num_hp); /* map all segments into memory to make sure we get the addrs. the * segments themselves are already in memseg list (which is shared and @@ -1570,7 +1570,7 @@ eal_legacy_hugepage_attach(void) fd = open(hf->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s: %s", hf->filepath, strerror(errno)); goto error; } @@ -1578,14 +1578,14 @@ eal_legacy_hugepage_attach(void) map_addr = mmap(map_addr, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0); if (map_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not map %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not map %s: %s", hf->filepath, strerror(errno)); goto fd_error; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed: %s", __func__, strerror(errno)); goto mmap_error; } @@ -1593,13 +1593,13 @@ eal_legacy_hugepage_attach(void) /* find segment data */ msl = rte_mem_virt2memseg_list(map_addr); if (msl == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg list\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg list", __func__); goto mmap_error; } ms = rte_mem_virt2memseg(map_addr, msl); if (ms == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg", __func__); goto mmap_error; } @@ -1607,14 +1607,14 @@ eal_legacy_hugepage_attach(void) msl_idx = msl - mcfg->memsegs; ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms); if (ms_idx < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg idx\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg idx", __func__); goto mmap_error; } /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } /* unmap the hugepage config file, since we are done using it */ @@ -1642,9 +1642,9 @@ static int eal_hugepage_attach(void) { if (eal_memalloc_sync_with_primary()) { - RTE_LOG(ERR, EAL, "Could not map memory from primary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not map memory from primary process"); if (aslr_enabled() > 0) - RTE_LOG(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes\n"); + RTE_LOG_LINE(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes"); return -1; } return 0; @@ -1740,7 +1740,7 @@ memseg_primary_init_32(void) max_mem = (uint64_t)RTE_MAX_MEM_MB << 20; if (total_requested_mem > max_mem) { - RTE_LOG(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory\n", + RTE_LOG_LINE(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory", (unsigned int)(max_mem >> 20)); return -1; } @@ -1787,7 +1787,7 @@ memseg_primary_init_32(void) skip |= active_sockets == 0 && socket_id != main_lcore_socket; if (skip) { - RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Will not preallocate memory on socket %u", socket_id); continue; } 
@@ -1819,8 +1819,8 @@ memseg_primary_init_32(void) max_pagesz_mem = RTE_ALIGN_FLOOR(max_pagesz_mem, hugepage_sz); - RTE_LOG(DEBUG, EAL, "Attempting to preallocate " - "%" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to preallocate " + "%" PRIu64 "M on socket %i", max_pagesz_mem >> 20, socket_id); type_msl_idx = 0; @@ -1830,8 +1830,8 @@ memseg_primary_init_32(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -1847,7 +1847,7 @@ memseg_primary_init_32(void) /* failing to allocate a memseg list is * a serious error. */ - RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list"); return -1; } @@ -1855,7 +1855,7 @@ memseg_primary_init_32(void) /* if we couldn't allocate VA space, we * can try with smaller page sizes. */ - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size"); /* deallocate memseg list */ if (memseg_list_free(msl)) return -1; @@ -1870,7 +1870,7 @@ memseg_primary_init_32(void) cur_socket_mem += cur_pagesz_mem; } if (cur_socket_mem == 0) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space on socket %u", socket_id); return -1; } @@ -1901,13 +1901,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } @@ -1930,21 +1930,21 @@ rte_eal_memseg_init(void) lim.rlim_cur = lim.rlim_max; if (setrlimit(RLIMIT_NOFILE, &lim) < 0) { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files failed: %s", strerror(errno)); } else { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files to %" - PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files to %" + PRIu64, (uint64_t)lim.rlim_cur); } } else { - RTE_LOG(ERR, EAL, "Cannot get current resource limits\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current resource limits"); } #ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES if (!internal_conf->legacy_mem && rte_socket_count() > 1) { - RTE_LOG(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.\n"); - RTE_LOG(WARNING, EAL, "This will have adverse consequences for performance and usability.\n"); - RTE_LOG(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.\n"); + RTE_LOG_LINE(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support."); + RTE_LOG_LINE(WARNING, EAL, "This will have adverse consequences for performance and usability."); + RTE_LOG_LINE(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support."); } #endif diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c index 880070c627..80b6f19a9e 100644 --- a/lib/eal/linux/eal_thread.c +++ 
b/lib/eal/linux/eal_thread.c @@ -28,7 +28,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); ret = pthread_setname_np((pthread_t)thread_id.opaque_id, truncated); #endif @@ -37,5 +37,5 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) RTE_SET_USED(thread_name); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index df9ad61ae9..3813b1a66e 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -139,20 +139,20 @@ rte_eal_hpet_init(int make_default) eal_get_internal_configuration(); if (internal_conf->no_hpet) { - RTE_LOG(NOTICE, EAL, "HPET is disabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "HPET is disabled"); return -1; } fd = open(DEV_HPET, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!\n", + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!", strerror(errno)); internal_conf->no_hpet = 1; return -1; } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!"); close(fd); internal_conf->no_hpet = 1; return -1; @@ -166,7 +166,7 @@ rte_eal_hpet_init(int make_default) eal_hpet_resolution_hz = (1000ULL*1000ULL*1000ULL*1000ULL*1000ULL) / (uint64_t)eal_hpet_resolution_fs; - RTE_LOG(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz\n", + RTE_LOG_LINE(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz", eal_hpet_resolution_hz/1000); eal_hpet_msb = (eal_hpet->counter_l >> 30); @@ -176,7 +176,7 @@ rte_eal_hpet_init(int make_default) ret = rte_thread_create_internal_control(&msb_inc_thread_id, "hpet-msb", hpet_msb_inc, NULL); if (ret != 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot create HPET timer thread!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot create HPET timer thread!"); internal_conf->no_hpet = 1; return -1; } diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c index ad3c1654b2..e8a783aaa8 100644 --- a/lib/eal/linux/eal_vfio.c +++ b/lib/eal/linux/eal_vfio.c @@ -367,7 +367,7 @@ vfio_open_group_fd(int iommu_group_num) if (vfio_group_fd < 0) { /* if file not found, it's not an error */ if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -379,8 +379,8 @@ vfio_open_group_fd(int iommu_group_num) vfio_group_fd = open(filename, O_RDWR); if (vfio_group_fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, - "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -408,14 +408,14 @@ vfio_open_group_fd(int iommu_group_num) if (p->result == SOCKET_OK && mp_rep->num_fds == 1) { vfio_group_fd = mp_rep->fds[0]; } else if (p->result == SOCKET_NO_FD) { - RTE_LOG(ERR, EAL, "Bad VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Bad VFIO group fd"); vfio_group_fd = -ENOENT; } } free(mp_reply.msgs); if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT) - RTE_LOG(ERR, EAL, "Cannot request VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO group fd"); return vfio_group_fd; } @@ -452,7 +452,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* 
Lets see first if there is room for a new group */ if (vfio_cfg->vfio_active_groups == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "Maximum number of VFIO groups reached!\n"); + RTE_LOG_LINE(ERR, EAL, "Maximum number of VFIO groups reached!"); return -1; } @@ -465,13 +465,13 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* This should not happen */ if (i == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "No VFIO group free slot found\n"); + RTE_LOG_LINE(ERR, EAL, "No VFIO group free slot found"); return -1; } vfio_group_fd = vfio_open_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(ERR, EAL, "Failed to open VFIO group %d\n", + RTE_LOG_LINE(ERR, EAL, "Failed to open VFIO group %d", iommu_group_num); return vfio_group_fd; } @@ -551,13 +551,13 @@ vfio_group_device_get(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices++; } @@ -570,13 +570,13 @@ vfio_group_device_put(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices--; } @@ -589,13 +589,13 @@ vfio_group_device_count(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) { - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); return -1; } @@ -636,8 +636,8 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, while (cur_len < len) { /* some memory segments may have invalid IOVA */ if (ms->iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Memory segment at %p has bad IOVA, skipping\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memory segment at %p has bad IOVA, skipping", ms->addr); goto next; } @@ -670,7 +670,7 @@ vfio_sync_default_container(void) /* default container fd should have been opened in rte_vfio_enable() */ if (!default_vfio_cfg->vfio_enabled || default_vfio_cfg->vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, "VFIO support is not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support is not initialized"); return -1; } @@ -690,8 +690,8 @@ vfio_sync_default_container(void) } free(mp_reply.msgs); if (iommu_type_id < 0) { - RTE_LOG(ERR, EAL, - "Could not get IOMMU type for default container\n"); + RTE_LOG_LINE(ERR, EAL, + "Could not get IOMMU type for default container"); return -1; } @@ -708,7 +708,7 @@ vfio_sync_default_container(void) return 0; } - RTE_LOG(ERR, EAL, "Could not find IOMMU type id (%i)\n", + RTE_LOG_LINE(ERR, EAL, "Could not find IOMMU type id (%i)", iommu_type_id); return -1; } @@ -721,7 +721,7 @@ rte_vfio_clear_group(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO 
group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } @@ -756,8 +756,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret == 0) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -776,8 +776,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * isn't managed by VFIO */ if (vfio_group_fd == -ENOENT) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -790,14 +790,14 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* check if the group is viable */ ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get VFIO group status, " - "error %i (%s)\n", dev_addr, errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s cannot get VFIO group status, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; } else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) { - RTE_LOG(ERR, EAL, "%s VFIO group is not viable! " - "Not all devices in IOMMU group bound to VFIO or unbound\n", + RTE_LOG_LINE(ERR, EAL, "%s VFIO group is not viable! " + "Not all devices in IOMMU group bound to VFIO or unbound", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -817,9 +817,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER, &vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s cannot add VFIO group to container, error " - "%i (%s)\n", dev_addr, errno, strerror(errno)); + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -841,8 +841,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* select an IOMMU type which we will be using */ t = vfio_set_iommu_type(vfio_container_fd); if (!t) { - RTE_LOG(ERR, EAL, - "%s failed to select IOMMU type\n", + RTE_LOG_LINE(ERR, EAL, + "%s failed to select IOMMU type", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -857,9 +857,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, else ret = 0; if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s DMA remapping failed, error " - "%i (%s)\n", + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -886,10 +886,10 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, map->addr, map->iova, map->len, 1); if (ret) { - RTE_LOG(ERR, EAL, "Couldn't map user memory for DMA: " + RTE_LOG_LINE(ERR, EAL, "Couldn't map user memory for DMA: " "va: 0x%" PRIx64 " " "iova: 0x%" PRIx64 " " - "len: 0x%" PRIu64 "\n", + "len: 0x%" PRIu64, map->addr, map->iova, map->len); rte_spinlock_recursive_unlock( @@ -911,13 +911,13 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, rte_mcfg_mem_read_unlock(); if (ret && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n"); + RTE_LOG_LINE(ERR, EAL, "Could not install memory event callback for VFIO"); return -1; } if (ret) - RTE_LOG(DEBUG, EAL, "Memory event callbacks not supported\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "Memory event callbacks not supported"); else - RTE_LOG(DEBUG, EAL, "Installed memory event callback for VFIO\n"); + RTE_LOG_LINE(DEBUG, EAL, "Installed memory event callback for VFIO"); } } else if (rte_eal_process_type() != RTE_PROC_PRIMARY && vfio_cfg == default_vfio_cfg && @@ -929,7 +929,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, */ ret = vfio_sync_default_container(); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not sync default VFIO container\n"); + RTE_LOG_LINE(ERR, EAL, "Could not sync default VFIO container"); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -937,7 +937,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* we have successfully initialized VFIO, notify user */ const struct vfio_iommu_type *t = default_vfio_cfg->vfio_iommu_type; - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); } @@ -965,7 +965,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * the VFIO group or the container not having IOMMU configured. */ - RTE_LOG(WARNING, EAL, "Getting a vfio_dev_fd for %s failed\n", + RTE_LOG_LINE(WARNING, EAL, "Getting a vfio_dev_fd for %s failed", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -976,8 +976,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, dev_get_info: ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get device info, " - "error %i (%s)\n", dev_addr, errno, + RTE_LOG_LINE(ERR, EAL, "%s cannot get device info, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(*vfio_dev_fd); close(vfio_group_fd); @@ -1007,7 +1007,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret <= 0) { - RTE_LOG(WARNING, EAL, "%s not managed by VFIO driver\n", + RTE_LOG_LINE(WARNING, EAL, "%s not managed by VFIO driver", dev_addr); /* This is an error at this point. 
*/ ret = -1; @@ -1017,7 +1017,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get the actual group fd */ vfio_group_fd = rte_vfio_get_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(INFO, EAL, "rte_vfio_get_group_fd failed for %s\n", + RTE_LOG_LINE(INFO, EAL, "rte_vfio_get_group_fd failed for %s", dev_addr); ret = vfio_group_fd; goto out; @@ -1034,7 +1034,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* Closing a device */ if (close(vfio_dev_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_dev_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_dev_fd for %s", dev_addr); ret = -1; goto out; @@ -1047,14 +1047,14 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, if (!vfio_group_device_count(vfio_group_fd)) { if (close(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_group_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_group_fd for %s", dev_addr); ret = -1; goto out; } if (rte_vfio_clear_group(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when clearing group for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when clearing group for %s", dev_addr); ret = -1; goto out; @@ -1101,21 +1101,21 @@ rte_vfio_enable(const char *modname) } } - RTE_LOG(DEBUG, EAL, "Probing VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Probing VFIO support..."); /* check if vfio module is loaded */ vfio_available = rte_eal_check_module(modname); /* return error directly */ if (vfio_available == -1) { - RTE_LOG(INFO, EAL, "Could not get loaded module details!\n"); + RTE_LOG_LINE(INFO, EAL, "Could not get loaded module details!"); return -1; } /* return 0 if VFIO modules not loaded */ if (vfio_available == 0) { - RTE_LOG(DEBUG, EAL, - "VFIO modules not loaded, skipping VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, + "VFIO modules not loaded, skipping VFIO support..."); return 0; } @@ -1131,10 +1131,10 @@ rte_vfio_enable(const char *modname) /* check if we have VFIO driver enabled */ if (default_vfio_cfg->vfio_container_fd != -1) { - RTE_LOG(INFO, EAL, "VFIO support initialized\n"); + RTE_LOG_LINE(INFO, EAL, "VFIO support initialized"); default_vfio_cfg->vfio_enabled = 1; } else { - RTE_LOG(NOTICE, EAL, "VFIO support could not be initialized\n"); + RTE_LOG_LINE(NOTICE, EAL, "VFIO support could not be initialized"); } return 0; @@ -1186,7 +1186,7 @@ vfio_get_default_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request default VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request default VFIO container fd"); return -1; } @@ -1209,13 +1209,13 @@ vfio_set_iommu_type(int vfio_container_fd) int ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, t->type_id); if (!ret) { - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); return t; } /* not an error, there may be more supported IOMMU types */ - RTE_LOG(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " - "%i (%s)\n", t->type_id, t->name, errno, + RTE_LOG_LINE(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " + "%i (%s)", t->type_id, t->name, errno, strerror(errno)); } /* if we didn't find a suitable IOMMU type, fail */ @@ -1233,15 +1233,15 @@ vfio_has_supported_extensions(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, t->type_id); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not get IOMMU type, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Could not get IOMMU type, 
error " + "%i (%s)", errno, strerror(errno)); close(vfio_container_fd); return -1; } else if (ret == 1) { /* we found a supported extension */ n_extensions++; } - RTE_LOG(DEBUG, EAL, "IOMMU type %d (%s) is %s\n", + RTE_LOG_LINE(DEBUG, EAL, "IOMMU type %d (%s) is %s", t->type_id, t->name, ret ? "supported" : "not supported"); } @@ -1271,9 +1271,9 @@ rte_vfio_get_container_fd(void) if (internal_conf->process_type == RTE_PROC_PRIMARY) { vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR); if (vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO container %s, error " - "%i (%s)\n", VFIO_CONTAINER_PATH, + "%i (%s)", VFIO_CONTAINER_PATH, errno, strerror(errno)); return -1; } @@ -1282,19 +1282,19 @@ rte_vfio_get_container_fd(void) ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION); if (ret != VFIO_API_VERSION) { if (ret < 0) - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Could not get VFIO API version, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); else - RTE_LOG(ERR, EAL, "Unsupported VFIO API version!\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported VFIO API version!"); close(vfio_container_fd); return -1; } ret = vfio_has_supported_extensions(vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, - "No supported IOMMU extensions found!\n"); + RTE_LOG_LINE(ERR, EAL, + "No supported IOMMU extensions found!"); return -1; } @@ -1322,7 +1322,7 @@ rte_vfio_get_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO container fd"); return -1; } @@ -1352,7 +1352,7 @@ rte_vfio_get_group_num(const char *sysfs_base, tok, RTE_DIM(tok), '/'); if (ret <= 0) { - RTE_LOG(ERR, EAL, "%s cannot get IOMMU group\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s cannot get IOMMU group", dev_addr); return -1; } @@ -1362,7 +1362,7 @@ rte_vfio_get_group_num(const char *sysfs_base, end = group_tok; *iommu_group_num = strtol(group_tok, &end, 10); if ((end != group_tok && *end != '\0') || errno != 0) { - RTE_LOG(ERR, EAL, "%s error parsing IOMMU number!\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s error parsing IOMMU number!", dev_addr); return -1; } @@ -1411,12 +1411,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, * returned from kernel. 
*/ if (errno == EEXIST) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Memory segment is already mapped, skipping"); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot set up DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1429,12 +1429,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot clear DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot clear DMA remapping, error " + "%i (%s)", errno, strerror(errno)); return -1; } else if (dma_unmap.size != len) { - RTE_LOG(ERR, EAL, "Unexpected size %"PRIu64 - " of DMA remapping cleared instead of %"PRIu64"\n", + RTE_LOG_LINE(ERR, EAL, "Unexpected size %"PRIu64 + " of DMA remapping cleared instead of %"PRIu64, (uint64_t)dma_unmap.size, len); rte_errno = EIO; return -1; @@ -1470,16 +1470,16 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, struct vfio_iommu_type1_dma_map dma_map; if (iova + len > spapr_dma_win_len) { - RTE_LOG(ERR, EAL, "DMA map attempt outside DMA window\n"); + RTE_LOG_LINE(ERR, EAL, "DMA map attempt outside DMA window"); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot register vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1493,8 +1493,8 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map); if (ret) { - RTE_LOG(ERR, EAL, "Cannot map vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot map vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1509,17 +1509,17 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot unregister vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1599,7 +1599,7 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) */ FILE *fd = fopen(proc_iomem, "r"); if (fd == NULL) { - RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Cannot open %s", proc_iomem); return -1; } /* Scan /proc/iomem for the highest PA in the system */ @@ -1612,15 +1612,15 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) /* Validate the format of the memory string */ if (space == NULL || dash == NULL || space < dash) { - RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n", + RTE_LOG_LINE(ERR, EAL, "Can't parse line \"%s\" in file %s", line, proc_iomem); continue; } start = strtoull(line, NULL, 16); end = strtoull(dash + 1, NULL, 16); - RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 - " to 0x%" PRIx64 "\n", start, end); + RTE_LOG_LINE(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 + " to 0x%" PRIx64, start, end); if (end > max) max = end; } @@ -1628,22 
+1628,22 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) fclose(fd); if (max == 0) { - RTE_LOG(ERR, EAL, "Failed to find valid \"System RAM\" " - "entry in file %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Failed to find valid \"System RAM\" " + "entry in file %s", proc_iomem); return -1; } spapr_dma_win_len = rte_align64pow2(max + 1); return 0; } else if (rte_eal_iova_mode() == RTE_IOVA_VA) { - RTE_LOG(DEBUG, EAL, "Highest VA address in memseg list is 0x%" - PRIx64 "\n", param->max_va); + RTE_LOG_LINE(DEBUG, EAL, "Highest VA address in memseg list is 0x%" + PRIx64, param->max_va); spapr_dma_win_len = rte_align64pow2(param->max_va); return 0; } spapr_dma_win_len = 0; - RTE_LOG(ERR, EAL, "Unsupported IOVA mode\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported IOVA mode"); return -1; } @@ -1668,18 +1668,18 @@ spapr_dma_win_size(void) /* walk the memseg list to find the page size/max VA address */ memset(&param, 0, sizeof(param)); if (rte_memseg_list_walk(vfio_spapr_size_walk, &param) < 0) { - RTE_LOG(ERR, EAL, "Failed to walk memseg list for DMA window size\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to walk memseg list for DMA window size"); return -1; } /* we can't be sure if DMA window covers external memory */ if (param.is_user_managed) - RTE_LOG(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU\n"); + RTE_LOG_LINE(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU"); /* check physical/virtual memory size */ if (find_highest_mem_addr(&param) < 0) return -1; - RTE_LOG(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64, spapr_dma_win_len); spapr_dma_win_page_sz = param.page_sz; rte_mem_set_dma_mask(rte_ctz64(spapr_dma_win_len)); @@ -1703,7 +1703,7 @@ vfio_spapr_create_dma_window(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info); if (ret) { - RTE_LOG(ERR, EAL, "Cannot get IOMMU info, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get IOMMU info, error %i (%s)", errno, strerror(errno)); return -1; } @@ -1744,17 +1744,17 @@ vfio_spapr_create_dma_window(int vfio_container_fd) } #endif /* VFIO_IOMMU_SPAPR_INFO_DDW */ if (ret) { - RTE_LOG(ERR, EAL, "Cannot create new DMA window, error " - "%i (%s)\n", errno, strerror(errno)); - RTE_LOG(ERR, EAL, - "Consider using a larger hugepage size if supported by the system\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new DMA window, error " + "%i (%s)", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, + "Consider using a larger hugepage size if supported by the system"); return -1; } /* verify the start address */ if (create.start_addr != 0) { - RTE_LOG(ERR, EAL, "Received unsupported start address 0x%" - PRIx64 "\n", (uint64_t)create.start_addr); + RTE_LOG_LINE(ERR, EAL, "Received unsupported start address 0x%" + PRIx64, (uint64_t)create.start_addr); return -1; } return ret; @@ -1769,13 +1769,13 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, if (do_map) { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 1)) { - RTE_LOG(ERR, EAL, "Failed to map DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to map DMA"); ret = -1; } } else { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 0)) { - RTE_LOG(ERR, EAL, "Failed to unmap DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap DMA"); ret = -1; } } @@ -1787,7 +1787,7 @@ static int vfio_spapr_dma_map(int vfio_container_fd) { if (vfio_spapr_create_dma_window(vfio_container_fd) < 0) { - 
RTE_LOG(ERR, EAL, "Could not create new DMA window!\n"); + RTE_LOG_LINE(ERR, EAL, "Could not create new DMA window!"); return -1; } @@ -1822,14 +1822,14 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, const struct vfio_iommu_type *t = vfio_cfg->vfio_iommu_type; if (!t) { - RTE_LOG(ERR, EAL, "VFIO support not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support not initialized"); rte_errno = ENODEV; return -1; } if (!t->dma_user_map_func) { - RTE_LOG(ERR, EAL, - "VFIO custom DMA region mapping not supported by IOMMU %s\n", + RTE_LOG_LINE(ERR, EAL, + "VFIO custom DMA region mapping not supported by IOMMU %s", t->name); rte_errno = ENOTSUP; return -1; @@ -1851,7 +1851,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, user_mem_maps = &vfio_cfg->mem_maps; rte_spinlock_recursive_lock(&user_mem_maps->lock); if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "No more space for user mem maps\n"); + RTE_LOG_LINE(ERR, EAL, "No more space for user mem maps"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1865,7 +1865,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * this to be unsupported, because we can't just store any old * mapping and pollute list of active mappings willy-nilly. */ - RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't map new region for DMA"); ret = -1; goto out; } @@ -1921,7 +1921,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, orig_maps, RTE_DIM(orig_maps)); /* did we find anything? */ if (n_orig < 0) { - RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find previously mapped region"); rte_errno = EINVAL; ret = -1; goto out; @@ -1943,7 +1943,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, vaddr + len, iova + len); if (!start_aligned || !end_aligned) { - RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA partial unmap unsupported"); rte_errno = ENOTSUP; ret = -1; goto out; @@ -1961,7 +1961,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, /* can we store the new maps in our list? */ newlen = (user_mem_maps->n_maps - n_orig) + n_new; if (newlen >= VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space to store partial mapping"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1978,11 +1978,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * within our mapped range but had invalid alignment). 
*/ if (rte_errno != ENODEV && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't unmap region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't unmap region for DMA"); ret = -1; goto out; } else { - RTE_LOG(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway"); } } @@ -2005,8 +2005,8 @@ rte_vfio_noiommu_is_enabled(void) fd = open(VFIO_NOIOMMU_MODE, O_RDONLY); if (fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } /* @@ -2019,8 +2019,8 @@ rte_vfio_noiommu_is_enabled(void) cnt = read(fd, &c, 1); close(fd); if (cnt != 1) { - RTE_LOG(ERR, EAL, "Unable to read from VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Unable to read from VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -2039,13 +2039,13 @@ rte_vfio_container_create(void) } if (i == VFIO_MAX_CONTAINERS) { - RTE_LOG(ERR, EAL, "Exceed max VFIO container limit\n"); + RTE_LOG_LINE(ERR, EAL, "Exceed max VFIO container limit"); return -1; } vfio_cfgs[i].vfio_container_fd = rte_vfio_get_container_fd(); if (vfio_cfgs[i].vfio_container_fd < 0) { - RTE_LOG(NOTICE, EAL, "Fail to create a new VFIO container\n"); + RTE_LOG_LINE(NOTICE, EAL, "Fail to create a new VFIO container"); return -1; } @@ -2060,7 +2060,7 @@ rte_vfio_container_destroy(int container_fd) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2084,7 +2084,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2100,7 +2100,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2113,14 +2113,14 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) /* This should not happen */ if (i == VFIO_MAX_GROUPS || cur_grp == NULL) { - RTE_LOG(ERR, EAL, "Specified VFIO group number not found\n"); + RTE_LOG_LINE(ERR, EAL, "Specified VFIO group number not found"); return -1; } if (cur_grp->fd >= 0 && close(cur_grp->fd) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Error when closing vfio_group_fd for iommu_group_num " - "%d\n", iommu_group_num); + "%d", iommu_group_num); return -1; } cur_grp->group_num = -1; @@ -2144,7 +2144,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2164,7 +2164,7 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } diff --git a/lib/eal/linux/eal_vfio_mp_sync.c 
b/lib/eal/linux/eal_vfio_mp_sync.c index 157f20e583..a78113844b 100644 --- a/lib/eal/linux/eal_vfio_mp_sync.c +++ b/lib/eal/linux/eal_vfio_mp_sync.c @@ -33,7 +33,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) (const struct vfio_mp_param *)msg->param; if (msg->len_param != sizeof(*m)) { - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } @@ -95,7 +95,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) break; } default: - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } diff --git a/lib/eal/riscv/rte_cycles.c b/lib/eal/riscv/rte_cycles.c index 358f271311..e27e02d9a9 100644 --- a/lib/eal/riscv/rte_cycles.c +++ b/lib/eal/riscv/rte_cycles.c @@ -38,14 +38,14 @@ __rte_riscv_timefrq(void) break; } fail: - RTE_LOG(WARNING, EAL, "Unable to read timebase-frequency from FDT.\n"); + RTE_LOG_LINE(WARNING, EAL, "Unable to read timebase-frequency from FDT."); return 0; } uint64_t get_tsc_freq_arch(void) { - RTE_LOG(NOTICE, EAL, "TSC using RISC-V %s.\n", + RTE_LOG_LINE(NOTICE, EAL, "TSC using RISC-V %s.", RTE_RISCV_RDTSC_USE_HPM ? "rdcycle" : "rdtime"); if (!RTE_RISCV_RDTSC_USE_HPM) return __rte_riscv_timefrq(); diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c index afbab9368a..4d90c2707f 100644 --- a/lib/eal/unix/eal_filesystem.c +++ b/lib/eal/unix/eal_filesystem.c @@ -41,7 +41,7 @@ int eal_create_runtime_dir(void) /* create DPDK subdirectory under runtime dir */ ret = snprintf(tmp, sizeof(tmp), "%s/dpdk", directory); if (ret < 0 || ret == sizeof(tmp)) { - RTE_LOG(ERR, EAL, "Error creating DPDK runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating DPDK runtime path name"); return -1; } @@ -49,7 +49,7 @@ int eal_create_runtime_dir(void) ret = snprintf(run_dir, sizeof(run_dir), "%s/%s", tmp, eal_get_hugefile_prefix()); if (ret < 0 || ret == sizeof(run_dir)) { - RTE_LOG(ERR, EAL, "Error creating prefix-specific runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating prefix-specific runtime path name"); return -1; } @@ -58,14 +58,14 @@ int eal_create_runtime_dir(void) */ ret = mkdir(tmp, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", tmp, strerror(errno)); return -1; } ret = mkdir(run_dir, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", run_dir, strerror(errno)); return -1; } @@ -84,20 +84,20 @@ int eal_parse_sysfs_value(const char *filename, unsigned long *val) char *end = NULL; if ((f = fopen(filename, "r")) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot open sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot open sysfs value %s", __func__, filename); return -1; } if (fgets(buf, sizeof(buf), f) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot read sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot read sysfs value %s", __func__, filename); fclose(f); return -1; } *val = strtoul(buf, &end, 0); if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) { - RTE_LOG(ERR, EAL, "%s(): cannot parse sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot parse sysfs value %s", __func__, filename); fclose(f); return -1; diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c index 1a7cf8e7b7..b071bb1396 100644 --- a/lib/eal/unix/eal_firmware.c +++ 
b/lib/eal/unix/eal_firmware.c @@ -151,7 +151,7 @@ rte_firmware_read(const char *name, void **buf, size_t *bufsz) path[PATH_MAX - 1] = '\0'; #ifndef RTE_HAS_LIBARCHIVE if (access(path, F_OK) == 0) { - RTE_LOG(WARNING, EAL, "libarchive not linked, %s cannot be decompressed\n", + RTE_LOG_LINE(WARNING, EAL, "libarchive not linked, %s cannot be decompressed", path); } #else diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c index 68ae93bd6e..16183fb395 100644 --- a/lib/eal/unix/eal_unix_memory.c +++ b/lib/eal/unix/eal_unix_memory.c @@ -29,8 +29,8 @@ mem_map(void *requested_addr, size_t size, int prot, int flags, { void *virt = mmap(requested_addr, size, prot, flags, fd, offset); if (virt == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, - "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s\n", + RTE_LOG_LINE(DEBUG, EAL, + "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s", requested_addr, size, prot, flags, fd, offset, strerror(errno)); rte_errno = errno; @@ -44,7 +44,7 @@ mem_unmap(void *virt, size_t size) { int ret = munmap(virt, size); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s", virt, size, strerror(errno)); rte_errno = errno; } @@ -83,7 +83,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump) int flags = dump ? EAL_DODUMP : EAL_DONTDUMP; int ret = madvise(virt, size, flags); if (ret) { - RTE_LOG(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s", virt, size, flags, strerror(rte_errno)); rte_errno = errno; } diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c index 36a21ab2f9..bee77e9448 100644 --- a/lib/eal/unix/rte_thread.c +++ b/lib/eal/unix/rte_thread.c @@ -53,7 +53,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = sched_get_priority_max(SCHED_RR); break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -79,7 +79,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -97,7 +97,7 @@ thread_start_wrapper(void *arg) if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) { ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "rte_thread_set_affinity_by_id failed"); } pthread_mutex_lock(&ctx->wrapper_mutex); @@ -136,7 +136,7 @@ rte_thread_create(rte_thread_t *thread_id, if (thread_attr != NULL) { ret = pthread_attr_init(&attr); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_init failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_init failed"); goto cleanup; } @@ -149,7 +149,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setinheritsched(attrp, PTHREAD_EXPLICIT_SCHED); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setinheritsched failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setinheritsched failed"); goto cleanup; } @@ -165,13 +165,13 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setschedpolicy(attrp, policy); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedpolicy failed\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedpolicy failed"); goto cleanup; } ret = pthread_attr_setschedparam(attrp, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedparam failed"); goto cleanup; } } @@ -179,7 +179,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp, thread_start_wrapper, &ctx); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_create failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_create failed"); goto cleanup; } @@ -211,7 +211,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr) ret = pthread_join((pthread_t)thread_id.opaque_id, pres); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_join failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_join failed"); return ret; } @@ -256,7 +256,7 @@ rte_thread_get_priority(rte_thread_t thread_id, ret = pthread_getschedparam((pthread_t)thread_id.opaque_id, &policy, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_getschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_getschedparam failed"); goto cleanup; } @@ -295,13 +295,13 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *)) *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } err = pthread_key_create(&((*key)->thread_index), destructor); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_create failed: %s", strerror(err)); free(*key); rte_errno = ENOEXEC; @@ -316,13 +316,13 @@ rte_thread_key_delete(rte_thread_key key) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_key_delete(key->thread_index); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_delete failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_delete failed: %s", strerror(err)); free(key); rte_errno = ENOEXEC; @@ -338,13 +338,13 @@ rte_thread_value_set(rte_thread_key key, const void *value) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_setspecific(key->thread_index, value); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_setspecific failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_setspecific failed: %s", strerror(err)); rte_errno = ENOEXEC; return -1; @@ -356,7 +356,7 @@ void * rte_thread_value_get(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c index 7ec2152211..b573fa7c74 100644 --- a/lib/eal/windows/eal.c +++ b/lib/eal/windows/eal.c @@ -67,7 +67,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -175,16 +175,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Windows", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Windows\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Windows", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Windows", opt); } eal_usage(prgname); return -1; @@ -217,7 +217,7 @@ static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Stubs to enable EAL trace point compilation @@ -312,8 +312,8 @@ rte_eal_init(int argc, char **argv) /* Prevent creation of shared memory files. */ if (internal_conf->in_memory == 0) { - RTE_LOG(WARNING, EAL, "Multi-process support is requested, " - "but not available.\n"); + RTE_LOG_LINE(WARNING, EAL, "Multi-process support is requested, " + "but not available."); internal_conf->in_memory = 1; internal_conf->no_shconf = 1; } @@ -356,21 +356,21 @@ rte_eal_init(int argc, char **argv) has_phys_addr = true; if (eal_mem_virt2iova_init() < 0) { /* Non-fatal error if physical addresses are not required. */ - RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, " - "PA will not be available\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot access virt2phys driver, " + "PA will not be available"); has_phys_addr = false; } iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -392,7 +392,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Selected IOVA mode '%s'", iova_mode == RTE_IOVA_PA ? "PA" : "VA"); rte_eal_get_configuration()->iova_mode = iova_mode; @@ -442,7 +442,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -474,7 +474,7 @@ rte_eal_init(int argc, char **argv) ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id, &lcore_config[i].cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Cannot set affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot set affinity"); } /* Initialize services so drivers can register services during probe. 
*/ diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c index 34b52380ce..c56aa0e687 100644 --- a/lib/eal/windows/eal_alarm.c +++ b/lib/eal/windows/eal_alarm.c @@ -92,7 +92,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) int ret; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); ret = -EINVAL; goto exit; } @@ -105,7 +105,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ap = calloc(1, sizeof(*ap)); if (ap == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate alarm entry\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate alarm entry"); ret = -ENOMEM; goto exit; } @@ -129,7 +129,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) /* Directly schedule callback execution. */ ret = alarm_set(ap, deadline); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm"); goto fail; } } else { @@ -143,7 +143,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ret = intr_thread_exec_sync(alarm_task_exec, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm in interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm in interrupt thread"); goto fail; } @@ -187,7 +187,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg) removed = 0; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); return -EINVAL; } @@ -246,7 +246,7 @@ intr_thread_exec_sync(void (*func)(void *arg), void *arg) rte_spinlock_lock(&task.lock); ret = eal_intr_thread_schedule(intr_thread_entry, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot schedule task to interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot schedule task to interrupt thread"); return -EINVAL; } diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c index 56ed70df7d..be646080c3 100644 --- a/lib/eal/windows/eal_debug.c +++ b/lib/eal/windows/eal_debug.c @@ -48,8 +48,8 @@ rte_dump_stack(void) error_code = GetLastError(); if (error_code == ERROR_INVALID_ADDRESS) { /* Missing symbols, print message */ - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [<missing_symbols>]\n", frame_num--); + RTE_LOG_LINE(ERR, EAL, + "%d: [<missing_symbols>]", frame_num--); continue; } else { RTE_LOG_WIN32_ERR("SymFromAddr()"); @@ -67,8 +67,8 @@ rte_dump_stack(void) } } - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [%s (%s+0x%0llx)[0x%0llX]]\n", frame_num, + RTE_LOG_LINE(ERR, EAL, + "%d: [%s (%s+0x%0llx)[0x%0llX]]", frame_num, error_code ? 
"<unknown>" : line.FileName, symbol_info->Name, sym_disp, symbol_info->Address); frame_num--; diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c index 35191056fd..264bc4a649 100644 --- a/lib/eal/windows/eal_dev.c +++ b/lib/eal/windows/eal_dev.c @@ -7,27 +7,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index 701cd0cb08..c7dfe2d238 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -89,8 +89,8 @@ hugepage_info_init(void) } hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz; - RTE_LOG(DEBUG, EAL, - "Found %u hugepages of %zu bytes on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Found %u hugepages of %zu bytes on socket %u", hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id); } @@ -105,13 +105,13 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege, " - "verify that large-page support privilege is assigned to the current user\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot claim hugepage privilege, " + "verify that large-page support privilege is assigned to the current user"); return -1; } if (hugepage_info_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot discover available hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot discover available hugepages"); return -1; } diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c index 49efdc098c..a9c62453b8 100644 --- a/lib/eal/windows/eal_interrupts.c +++ b/lib/eal/windows/eal_interrupts.c @@ -39,7 +39,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) bool finished = false; if (eal_intr_thread_handle_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot open interrupt thread handle\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot open interrupt thread handle"); goto cleanup; } @@ -57,7 +57,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) DWORD error = GetLastError(); if (error != WAIT_IO_COMPLETION) { RTE_LOG_WIN32_ERR("GetQueuedCompletionStatusEx()"); - RTE_LOG(ERR, EAL, "Failed waiting for interrupts\n"); + RTE_LOG_LINE(ERR, EAL, "Failed waiting for interrupts"); break; } @@ -94,7 +94,7 @@ rte_eal_intr_init(void) intr_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1); if (intr_iocp == NULL) { RTE_LOG_WIN32_ERR("CreateIoCompletionPort()"); - RTE_LOG(ERR, EAL, "Cannot create interrupt IOCP\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt IOCP"); return -1; } @@ -102,7 +102,7 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, "Cannot create interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt thread"); } return ret; @@ -140,7 +140,7 @@ eal_intr_thread_cancel(void) if (!PostQueuedCompletionStatus( intr_iocp, 0, IOCP_KEY_SHUTDOWN, NULL)) { 
RTE_LOG_WIN32_ERR("PostQueuedCompletionStatus()"); - RTE_LOG(ERR, EAL, "Cannot cancel interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot cancel interrupt thread"); return; } diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c index 286fe241eb..da3be08aab 100644 --- a/lib/eal/windows/eal_lcore.c +++ b/lib/eal/windows/eal_lcore.c @@ -65,7 +65,7 @@ eal_query_group_affinity(void) &infos_size)) { DWORD error = GetLastError(); if (error != ERROR_INSUFFICIENT_BUFFER) { - RTE_LOG(ERR, EAL, "Cannot get group information size, error %lu\n", error); + RTE_LOG_LINE(ERR, EAL, "Cannot get group information size, error %lu", error); rte_errno = EINVAL; ret = -1; goto cleanup; @@ -74,7 +74,7 @@ eal_query_group_affinity(void) infos = malloc(infos_size); if (infos == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for NUMA node information\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for NUMA node information"); rte_errno = ENOMEM; ret = -1; goto cleanup; @@ -82,7 +82,7 @@ eal_query_group_affinity(void) if (!GetLogicalProcessorInformationEx(RelationGroup, infos, &infos_size)) { - RTE_LOG(ERR, EAL, "Cannot get group information, error %lu\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get group information, error %lu", GetLastError()); rte_errno = EINVAL; ret = -1; diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c index aa7589b81d..fa9d1fdc1e 100644 --- a/lib/eal/windows/eal_memalloc.c +++ b/lib/eal/windows/eal_memalloc.c @@ -52,7 +52,7 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, } /* Bugcheck, should not happen. */ - RTE_LOG(DEBUG, EAL, "Attempted to reallocate segment %p " + RTE_LOG_LINE(DEBUG, EAL, "Attempted to reallocate segment %p " "(size %zu) on socket %d", ms->addr, ms->len, ms->socket_id); return -1; @@ -66,8 +66,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* Request a new chunk of memory from OS. */ addr = eal_mem_alloc_socket(alloc_sz, socket_id); if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate %zu bytes " - "on socket %d\n", alloc_sz, socket_id); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate %zu bytes " + "on socket %d", alloc_sz, socket_id); return -1; } } else { @@ -79,15 +79,15 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, * error, because it breaks MSL assumptions. 
*/ if ((addr != NULL) && (addr != requested_addr)) { - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", requested_addr); return -1; } if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot commit reserved memory %p " - "(size %zu) on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot commit reserved memory %p " + "(size %zu) on socket %d", requested_addr, alloc_sz, socket_id); return -1; } @@ -101,8 +101,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Cannot get IOVA of allocated segment\n"); + RTE_LOG_LINE(DEBUG, EAL, + "Cannot get IOVA of allocated segment"); goto error; } @@ -115,12 +115,12 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, page = &info.VirtualAttributes; if (!page->Valid || !page->LargePage) { - RTE_LOG(DEBUG, EAL, "Got regular page instead of a hugepage\n"); + RTE_LOG_LINE(DEBUG, EAL, "Got regular page instead of a hugepage"); goto error; } if (page->Node != numa_node) { - RTE_LOG(DEBUG, EAL, - "NUMA node hint %u (socket %d) not respected, got %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "NUMA node hint %u (socket %d) not respected, got %u", numa_node, socket_id, page->Node); goto error; } @@ -141,8 +141,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* During decommitment, memory is temporarily returned * to the system and the address may become unavailable. */ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", addr); + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", addr); } return -1; } @@ -153,8 +153,8 @@ free_seg(struct rte_memseg *ms) if (eal_mem_decommit(ms->addr, ms->len)) { if (rte_errno == EADDRNOTAVAIL) { /* See alloc_seg() for explanation. 
*/ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", ms->addr); } return -1; @@ -233,8 +233,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) map_addr = RTE_PTR_ADD(cur_msl->base_va, cur_idx * page_sz); if (alloc_seg(cur, map_addr, wa->socket, wa->hi)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, " - "but only %i were allocated\n", need, i); + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, " + "but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ if (!wa->exact) @@ -249,7 +249,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) rte_fbarray_set_free(arr, j); if (free_seg(tmp)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -318,7 +318,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, eal_get_internal_configuration(); if (internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "dynamic allocation not supported in legacy mode\n"); + RTE_LOG_LINE(ERR, EAL, "dynamic allocation not supported in legacy mode"); return -ENOTSUP; } @@ -330,7 +330,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, } } if (!hi) { - RTE_LOG(ERR, EAL, "cannot find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find relevant hugepage_info entry"); return -1; } @@ -346,7 +346,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "cannot find suitable memseg_list\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find suitable memseg_list"); ret = -1; } else if (ret > 0) { ret = (int)wa.segs_allocated; @@ -383,7 +383,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -396,7 +396,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -411,7 +411,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index fd39155163..7e1e8d4c84 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -114,8 +114,8 @@ eal_mem_win32api_init(void) library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ - RTE_LOG(ERR, EAL, "Windows 10 or Windows Server 2019 " - " is required for memory management\n"); + RTE_LOG_LINE(ERR, EAL, "Windows 10 or Windows Server 2019 " + " is required for memory management"); ret = -1; } @@ -173,8 +173,8 @@ eal_mem_virt2iova_init(void) detail = malloc(detail_size); if (detail == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate virt2phys " - "device interface detail data\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate virt2phys " + "device interface detail data"); goto exit; } @@ -185,7 +185,7 @@ eal_mem_virt2iova_init(void) goto exit; } - RTE_LOG(DEBUG, EAL, "Found virt2phys device: %s\n", detail->DevicePath); + RTE_LOG_LINE(DEBUG, EAL, "Found virt2phys device: %s", detail->DevicePath); virt2phys_device = CreateFile( detail->DevicePath, 0, 0, NULL, OPEN_EXISTING, 0, NULL); @@ -574,8 +574,8 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags, int ret = mem_free(requested_addr, size, true); if (ret) { if (ret > 0) { - RTE_LOG(ERR, EAL, "Cannot map memory " - "to a region not reserved\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot map memory " + "to a region not reserved"); rte_errno = EADDRNOTAVAIL; } return NULL; @@ -691,7 +691,7 @@ eal_nohuge_init(void) NULL, mem_sz, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE); if (addr == NULL) { RTE_LOG_WIN32_ERR("VirtualAlloc(size=%#zx)", mem_sz); - RTE_LOG(ERR, EAL, "Cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory"); return -1; } @@ -702,9 +702,9 @@ eal_nohuge_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't allocate memory due to IOVA " - "exceeding limits of current DMA mask.\n", __func__); + "exceeding limits of current DMA mask.", __func__); return -1; } diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 43b228d388..ee206f365d 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -17,7 +17,7 @@ */ #define EAL_LOG_NOT_IMPLEMENTED() \ do { \ - RTE_LOG(DEBUG, EAL, "%s() is not implemented\n", __func__); \ + RTE_LOG_LINE(DEBUG, EAL, "%s() is not implemented", __func__); \ rte_errno = ENOTSUP; \ } while (0) @@ -25,7 +25,7 @@ * Log current function as a stub. */ #define EAL_LOG_STUB() \ - RTE_LOG(DEBUG, EAL, "Windows: %s() is a stub\n", __func__) + RTE_LOG_LINE(DEBUG, EAL, "Windows: %s() is a stub", __func__) /** * Create a map of processors and cores on the system. diff --git a/lib/eal/windows/include/rte_windows.h b/lib/eal/windows/include/rte_windows.h index 83730c3d2e..015072885b 100644 --- a/lib/eal/windows/include/rte_windows.h +++ b/lib/eal/windows/include/rte_windows.h @@ -48,8 +48,8 @@ extern "C" { * Log GetLastError() with context, usually a Win32 API function and arguments. */ #define RTE_LOG_WIN32_ERR(...) 
\ - RTE_LOG(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", GetLastError(), \ + RTE_LOG_LINE(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ + RTE_FMT_HEAD(__VA_ARGS__,), GetLastError(), \ RTE_FMT_TAIL(__VA_ARGS__,))) #ifdef __cplusplus diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c index 145ac4b5aa..7c62f57e0d 100644 --- a/lib/eal/windows/rte_thread.c +++ b/lib/eal/windows/rte_thread.c @@ -67,7 +67,7 @@ static int thread_log_last_error(const char *message) { DWORD error = GetLastError(); - RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message); + RTE_LOG_LINE(DEBUG, EAL, "GetLastError()=%lu: %s", error, message); return thread_translate_win32_error(error); } @@ -90,7 +90,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = THREAD_PRIORITY_TIME_CRITICAL; break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -109,7 +109,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; case HIGH_PRIORITY_CLASS: - RTE_LOG(WARNING, EAL, "The OS priority class is high not real-time.\n"); + RTE_LOG_LINE(WARNING, EAL, "The OS priority class is high not real-time."); /* FALLTHROUGH */ case REALTIME_PRIORITY_CLASS: if (os_pri == THREAD_PRIORITY_TIME_CRITICAL) { @@ -118,7 +118,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -148,7 +148,7 @@ convert_cpuset_to_affinity(const rte_cpuset_t *cpuset, if (affinity->Group == (USHORT)-1) { affinity->Group = cpu_affinity->Group; } else if (affinity->Group != cpu_affinity->Group) { - RTE_LOG(DEBUG, EAL, "All processors must belong to the same processor group\n"); + RTE_LOG_LINE(DEBUG, EAL, "All processors must belong to the same processor group"); ret = ENOTSUP; goto cleanup; } @@ -194,7 +194,7 @@ rte_thread_create(rte_thread_t *thread_id, ctx = calloc(1, sizeof(*ctx)); if (ctx == NULL) { - RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n"); + RTE_LOG_LINE(DEBUG, EAL, "Insufficient memory for thread context allocations"); ret = ENOMEM; goto cleanup; } @@ -217,7 +217,7 @@ rte_thread_create(rte_thread_t *thread_id, &thread_affinity ); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); thread_exit = true; goto resume_thread; } @@ -232,7 +232,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = rte_thread_set_priority(*thread_id, thread_attr->priority); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to set thread priority\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to set thread priority"); thread_exit = true; goto resume_thread; } @@ -360,7 +360,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) CloseHandle(thread_handle); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } int @@ -446,7 +446,7 @@ rte_thread_key_create(rte_thread_key *key, { *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } @@ -464,7 +464,7 @@ int 
rte_thread_key_delete(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -484,7 +484,7 @@ rte_thread_value_set(rte_thread_key key, const void *value) char *p; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -504,7 +504,7 @@ rte_thread_value_get(rte_thread_key key) void *output; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } @@ -532,7 +532,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id, ret = convert_cpuset_to_affinity(cpuset, &thread_affinity); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); goto cleanup; } diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c index 78fb9250ef..e441263335 100644 --- a/lib/efd/rte_efd.c +++ b/lib/efd/rte_efd.c @@ -512,13 +512,13 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list); if (online_cpu_socket_bitmask == 0) { - RTE_LOG(ERR, EFD, "At least one CPU socket must be enabled " - "in the bitmask\n"); + RTE_LOG_LINE(ERR, EFD, "At least one CPU socket must be enabled " + "in the bitmask"); return NULL; } if (max_num_rules == 0) { - RTE_LOG(ERR, EFD, "Max num rules must be higher than 0\n"); + RTE_LOG_LINE(ERR, EFD, "Max num rules must be higher than 0"); return NULL; } @@ -557,7 +557,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, te = rte_zmalloc("EFD_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, EFD, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -567,15 +567,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD table management structure" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating EFD table management structure" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, "Allocated EFD table management structure " - "on socket %u\n", offline_cpu_socket); + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD table management structure " + "on socket %u", offline_cpu_socket); table->max_num_rules = num_chunks * EFD_TARGET_CHUNK_MAX_NUM_RULES; table->num_rules = 0; @@ -589,16 +589,16 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (key_array == NULL) { - RTE_LOG(ERR, EFD, "Allocating key array" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating key array" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } table->keys = key_array; strlcpy(table->name, name, sizeof(table->name)); - RTE_LOG(DEBUG, EFD, "Creating an EFD table with %u chunks," - " which potentially supports %u entries\n", + RTE_LOG_LINE(DEBUG, EFD, "Creating an EFD table with %u chunks," + " which potentially supports %u entries", num_chunks, table->max_num_rules); /* Make sure all the allocatable table pointers are NULL initially */ @@ -626,15 +626,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, socket_id); if (table->chunks[socket_id] == 
NULL) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Allocating EFD online table on " - "socket %u failed\n", + "socket %u failed", socket_id); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD online table of size " - "%"PRIu64" bytes (%.2f MB) on socket %u\n", + "%"PRIu64" bytes (%.2f MB) on socket %u", online_table_size, (float) online_table_size / (1024.0F * 1024.0F), @@ -678,14 +678,14 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table->offline_chunks == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD offline table on socket %u " - "failed\n", offline_cpu_socket); + RTE_LOG_LINE(ERR, EFD, "Allocating EFD offline table on socket %u " + "failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD offline table of size %"PRIu64" bytes " - " (%.2f MB) on socket %u\n", offline_table_size, + " (%.2f MB) on socket %u", offline_table_size, (float) offline_table_size / (1024.0F * 1024.0F), offline_cpu_socket); @@ -698,7 +698,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, r = rte_ring_create(ring_name, rte_align32pow2(table->max_num_rules), offline_cpu_socket, 0); if (r == NULL) { - RTE_LOG(ERR, EFD, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "memory allocation failed"); rte_efd_free(table); return NULL; } @@ -1018,9 +1018,9 @@ efd_compute_update(struct rte_efd_table * const table, if (found == 0) { /* Key does not exist. Insert the rule into the bin/group */ if (unlikely(current_group->num_rules >= EFD_MAX_GROUP_NUM_RULES)) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Fatal: No room remaining for insert into " - "chunk %u group %u bin %u\n", + "chunk %u group %u bin %u", *chunk_id, current_group_id, *bin_id); return RTE_EFD_UPDATE_FAILED; @@ -1028,9 +1028,9 @@ efd_compute_update(struct rte_efd_table * const table, if (unlikely(current_group->num_rules == (EFD_MAX_GROUP_NUM_RULES - 1))) { - RTE_LOG(INFO, EFD, "Warn: Insert into last " + RTE_LOG_LINE(INFO, EFD, "Warn: Insert into last " "available slot in chunk %u " - "group %u bin %u\n", *chunk_id, + "group %u bin %u", *chunk_id, current_group_id, *bin_id); status = RTE_EFD_UPDATE_WARN_GROUP_FULL; } @@ -1117,10 +1117,10 @@ efd_compute_update(struct rte_efd_table * const table, if (current_group != new_group && new_group->num_rules + bin_size > EFD_MAX_GROUP_NUM_RULES) { - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Unable to move_groups to dest group " "containing %u entries." - "bin_size:%u choice:%02x\n", + "bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); goto next_choice; @@ -1135,9 +1135,9 @@ efd_compute_update(struct rte_efd_table * const table, if (!ret) return status; - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Failed to find perfect hash for group " - "containing %u entries. bin_size:%u choice:%02x\n", + "containing %u entries. 
bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); /* Restore groups modified to their previous state */ revert_groups(current_group, new_group, bin_size); diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index f88e71a59d..3d9bf6fe9d 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -171,8 +171,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) rib = rte_rib_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,9 +217,9 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, + RTE_LOG_LINE(ERR, LPM, "FIB dataplane struct %s memory allocation failed " - "with err %d\n", name, ret); + "with err %d", name, ret); rte_errno = -ret; goto free_fib; } diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index ab1d960479..2d23c09eea 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -171,8 +171,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) rib = rte_rib6_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib6), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,8 +217,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, - "FIB dataplane struct %s memory allocation failed\n", + RTE_LOG_LINE(ERR, LPM, + "FIB dataplane struct %s memory allocation failed", name); rte_errno = -ret; goto free_fib; diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c index 8e4364f060..2a7b38843d 100644 --- a/lib/hash/rte_cuckoo_hash.c +++ b/lib/hash/rte_cuckoo_hash.c @@ -164,7 +164,7 @@ rte_hash_create(const struct rte_hash_parameters 
*params) hash_list = RTE_TAILQ_CAST(rte_hash_tailq.head, rte_hash_list); if (params == NULL) { - RTE_LOG(ERR, HASH, "rte_hash_create has no parameters\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create has no parameters"); return NULL; } @@ -173,13 +173,13 @@ rte_hash_create(const struct rte_hash_parameters *params) (params->entries < RTE_HASH_BUCKET_ENTRIES) || (params->key_len == 0)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create has invalid parameters"); return NULL; } if (params->extra_flag & ~RTE_HASH_EXTRA_FLAGS_MASK) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: unsupported extra flags\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create: unsupported extra flags"); return NULL; } @@ -187,8 +187,8 @@ rte_hash_create(const struct rte_hash_parameters *params) if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) && (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: choose rw concurrency or " - "rw concurrency lock free\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create: choose rw concurrency or " + "rw concurrency lock free"); return NULL; } @@ -238,7 +238,7 @@ rte_hash_create(const struct rte_hash_parameters *params) r = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err; } @@ -254,8 +254,8 @@ rte_hash_create(const struct rte_hash_parameters *params) params->socket_id, 0); if (r_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err; } } @@ -280,7 +280,7 @@ rte_hash_create(const struct rte_hash_parameters *params) te = rte_zmalloc("HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "tailq entry allocation failed"); goto err_unlock; } @@ -288,7 +288,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (h == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -297,7 +297,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets == NULL) { - RTE_LOG(ERR, HASH, "buckets memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "buckets memory allocation failed"); goto err_unlock; } @@ -307,8 +307,8 @@ rte_hash_create(const struct rte_hash_parameters *params) num_buckets * sizeof(struct rte_hash_bucket), RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err_unlock; } /* Populate ext bkt ring. 
We reserve 0 similar to the @@ -323,8 +323,8 @@ rte_hash_create(const struct rte_hash_parameters *params) ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * num_key_slots, 0); if (ext_bkt_to_free == NULL) { - RTE_LOG(ERR, HASH, "ext bkt to free memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext bkt to free memory allocation " + "failed"); goto err_unlock; } } @@ -339,7 +339,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (k == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -347,7 +347,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (tbl_chng_cnt == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -395,7 +395,7 @@ rte_hash_create(const struct rte_hash_parameters *params) sizeof(struct lcore_cache) * RTE_MAX_LCORE, RTE_CACHE_LINE_SIZE, params->socket_id); if (local_free_slots == NULL) { - RTE_LOG(ERR, HASH, "local free slots memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "local free slots memory allocation failed"); goto err_unlock; } } @@ -637,7 +637,7 @@ rte_hash_reset(struct rte_hash *h) /* Reclaim all the resources */ rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL); if (pending != 0) - RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n"); + RTE_LOG_LINE(ERR, HASH, "RCU reclaim all resources failed"); } memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket)); @@ -1511,8 +1511,8 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n) /* Return key indexes to free slot ring */ ret = free_slot(h, rcu_dq_entry.key_idx); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1540,7 +1540,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0); if (hash_rcu_cfg == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); return 1; } @@ -1564,7 +1564,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) h->dq = rte_rcu_qsbr_dq_create(¶ms); if (h->dq == NULL) { rte_free(hash_rcu_cfg); - RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n"); + RTE_LOG_LINE(ERR, HASH, "HASH defer queue creation failed"); return 1; } } else { @@ -1593,8 +1593,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, int ret = free_slot(h, bkt->key_idx[i]); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1783,7 +1783,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, } else if (h->dq) /* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */ if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0) - RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to push QSBR FIFO"); } __hash_rw_writer_unlock(h); return ret; diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c index faeb50cd89..20433a92c8 100644 --- a/lib/hash/rte_fbk_hash.c +++ b/lib/hash/rte_fbk_hash.c @@ -118,7 +118,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params 
*params) te = rte_zmalloc("FBK_HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate tailq entry"); goto exit; } @@ -126,7 +126,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params) ht = rte_zmalloc_socket(hash_name, mem_size, 0, params->socket_id); if (ht == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate fbk hash table"); rte_free(te); goto exit; } diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c index 1439d8a71f..0d52840eaa 100644 --- a/lib/hash/rte_hash_crc.c +++ b/lib/hash/rte_hash_crc.c @@ -34,8 +34,8 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_X86 if (!(alg & CRC32_SSE42_x64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42"); if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42) rte_hash_crc32_alg = CRC32_SSE42; else @@ -44,15 +44,15 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_ARM64 if (!(alg & CRC32_ARM64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_ARM64\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_ARM64"); if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32)) rte_hash_crc32_alg = CRC32_ARM64; #endif if (rte_hash_crc32_alg == CRC32_SW) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_SW\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_SW"); } /* Setting the best available algorithm */ diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index d819dddd84..a5d84eee8e 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -243,8 +243,8 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, /* allocate tailq entry */ te = rte_zmalloc("THASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, - "Can not allocate tailq entry for thash context %s\n", + RTE_LOG_LINE(ERR, HASH, + "Can not allocate tailq entry for thash context %s", name); rte_errno = ENOMEM; goto exit; @@ -252,7 +252,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx = rte_zmalloc(NULL, sizeof(struct rte_thash_ctx) + key_len, 0); if (ctx == NULL) { - RTE_LOG(ERR, HASH, "thash ctx %s memory allocation failed\n", + RTE_LOG_LINE(ERR, HASH, "thash ctx %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; @@ -275,7 +275,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t), RTE_CACHE_LINE_SIZE); if (ctx->matrices == NULL) { - RTE_LOG(ERR, HASH, "Cannot allocate matrices\n"); + RTE_LOG_LINE(ERR, HASH, "Cannot allocate matrices"); rte_errno = ENOMEM; goto free_ctx; } @@ -390,8 +390,8 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr, if (((lfsr->bits_cnt + req_bits) > (1ULL << lfsr->deg) - 1) && ((ctx->flags & RTE_THASH_IGNORE_PERIOD_OVERFLOW) != RTE_THASH_IGNORE_PERIOD_OVERFLOW)) { - RTE_LOG(ERR, HASH, - "Can't generate m-sequence due to period overflow\n"); + RTE_LOG_LINE(ERR, HASH, + "Can't generate m-sequence due to period overflow"); return -ENOSPC; } @@ -470,9 +470,9 @@ insert_before(struct rte_thash_ctx *ctx, return ret; } } else if ((next_ent != NULL) && (end > 
next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -ENOSPC; } @@ -519,9 +519,9 @@ insert_after(struct rte_thash_ctx *ctx, int ret; if ((next_ent != NULL) && (end > next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -EEXIST; } diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c index c863789b51..6b84180b62 100644 --- a/lib/hash/rte_thash_gfni.c +++ b/lib/hash/rte_thash_gfni.c @@ -20,8 +20,8 @@ rte_thash_gfni(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } return 0; @@ -38,8 +38,8 @@ rte_thash_gfni_bulk(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } for (i = 0; i < num; i++) diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c index eed399da6b..02dcac3137 100644 --- a/lib/ip_frag/rte_ip_frag_common.c +++ b/lib/ip_frag/rte_ip_frag_common.c @@ -54,20 +54,20 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries, if (rte_is_power_of_2(bucket_entries) == 0 || nb_entries > UINT32_MAX || nb_entries == 0 || nb_entries < max_entries) { - RTE_LOG(ERR, IPFRAG, "%s: invalid input parameter\n", __func__); + RTE_LOG_LINE(ERR, IPFRAG, "%s: invalid input parameter", __func__); return NULL; } sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]); if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE, socket_id)) == NULL) { - RTE_LOG(ERR, IPFRAG, - "%s: allocation of %zu bytes at socket %d failed do\n", + RTE_LOG_LINE(ERR, IPFRAG, + "%s: allocation of %zu bytes at socket %d failed do", __func__, sz, socket_id); return NULL; } - RTE_LOG(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d\n", + RTE_LOG_LINE(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d", __func__, sz, socket_id); tbl->max_cycles = max_cycles; diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c index f3c1746cca..cc3c2cf4de 100644 --- a/lib/latencystats/rte_latencystats.c +++ b/lib/latencystats/rte_latencystats.c @@ -25,7 +25,6 @@ latencystat_cycles_per_ns(void) return rte_get_timer_hz() / NS_PER_SEC; } -/* Macros for printing using RTE_LOG */ RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO); #define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype @@ -96,7 +95,7 @@ rte_latencystats_update(void) latency_stats_index, values, NUM_LATENCY_STATS); if (ret < 0) - RTE_LOG(INFO, LATENCY_STATS, "Failed to push the stats\n"); + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to push the stats"); return ret; } @@ -228,7 +227,7 @@ rte_latencystats_init(uint64_t app_samp_intvl, mz = rte_memzone_reserve(MZ_RTE_LATENCY_STATS, sizeof(*glob_stats), rte_socket_id(), flags); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d\n", + RTE_LOG_LINE(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d", __func__, __LINE__); return -ENOMEM; } @@ -244,8 +243,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, latency_stats_index = 
rte_metrics_reg_names(ptr_strings, NUM_LATENCY_STATS); if (latency_stats_index < 0) { - RTE_LOG(DEBUG, LATENCY_STATS, - "Failed to register latency stats names\n"); + RTE_LOG_LINE(DEBUG, LATENCY_STATS, + "Failed to register latency stats names"); return -1; } @@ -253,8 +252,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_mbuf_dyn_rx_timestamp_register(×tamp_dynfield_offset, ×tamp_dynflag); if (ret != 0) { - RTE_LOG(ERR, LATENCY_STATS, - "Cannot register mbuf field/flag for timestamp\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Cannot register mbuf field/flag for timestamp"); return -rte_errno; } @@ -264,8 +263,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -276,18 +275,18 @@ rte_latencystats_init(uint64_t app_samp_intvl, cbs->cb = rte_eth_add_first_rx_callback(pid, qid, add_time_stamps, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; cbs->cb = rte_eth_add_tx_callback(pid, qid, calc_latency, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } return 0; @@ -308,8 +307,8 @@ rte_latencystats_uninit(void) ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -319,17 +318,17 @@ rte_latencystats_uninit(void) cbs = &rx_cbs[pid][qid]; ret = rte_eth_remove_rx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; ret = rte_eth_remove_tx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } @@ -366,8 +365,8 @@ rte_latencystats_get(struct rte_metric_value *values, uint16_t size) const struct rte_memzone *mz; mz = rte_memzone_lookup(MZ_RTE_LATENCY_STATS); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, - "Latency stats memzone not found\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Latency stats memzone not found"); return -ENOMEM; } glob_stats = mz->addr; diff --git a/lib/log/log.c b/lib/log/log.c index e3cd4cff0f..d03691db0d 100644 --- a/lib/log/log.c +++ b/lib/log/log.c @@ -146,7 +146,7 @@ logtype_set_level(uint32_t type, uint32_t level) if (current != level) { rte_logs.dynamic_types[type].loglevel = level; - RTE_LOG(DEBUG, EAL, "%s log level changed from %s to %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s log level changed from %s to %s", rte_logs.dynamic_types[type].name == NULL ? 
"" : rte_logs.dynamic_types[type].name, eal_log_level2str(current), @@ -519,8 +519,8 @@ eal_log_set_default(FILE *default_log) default_log_stream = default_log; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - RTE_LOG(NOTICE, EAL, - "Debug dataplane logs available - lower performance\n"); + RTE_LOG_LINE(NOTICE, EAL, + "Debug dataplane logs available - lower performance"); #endif } diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c index 0ca8214786..a332faf720 100644 --- a/lib/lpm/rte_lpm.c +++ b/lib/lpm/rte_lpm.c @@ -192,7 +192,7 @@ rte_lpm_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry"); rte_errno = ENOMEM; goto exit; } @@ -201,7 +201,7 @@ rte_lpm_create(const char *name, int socket_id, i_lpm = rte_zmalloc_socket(mem_name, mem_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto exit; @@ -211,7 +211,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM rules_tbl memory allocation failed"); rte_free(i_lpm); i_lpm = NULL; rte_free(te); @@ -223,7 +223,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->lpm.tbl8 == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 memory allocation failed"); rte_free(i_lpm->rules_tbl); rte_free(i_lpm); i_lpm = NULL; @@ -338,7 +338,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg) params.v = cfg->v; i_lpm->dq = rte_rcu_qsbr_dq_create(¶ms); if (i_lpm->dq == NULL) { - RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM defer queue creation failed"); return 1; } } else { @@ -565,7 +565,7 @@ tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start) status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq, (void *)&tbl8_group_start); if (status == 1) { - RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to push QSBR FIFO"); return -rte_errno; } } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 24ce7dd022..251bfcc73d 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", + 
RTE_LOG_LINE(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -330,7 +330,7 @@ rte_lpm6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM6_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry!"); rte_errno = ENOMEM; goto fail; } @@ -340,7 +340,7 @@ rte_lpm6_create(const char *name, int socket_id, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto fail; diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c index 3eccc61827..8472c6a977 100644 --- a/lib/mbuf/rte_mbuf.c +++ b/lib/mbuf/rte_mbuf.c @@ -231,7 +231,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -251,7 +251,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; @@ -297,7 +297,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -307,12 +307,12 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, const struct rte_pktmbuf_extmem *extm = ext_mem + i; if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) { - RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n"); + RTE_LOG_LINE(ERR, MBUF, "invalid extmem descriptor"); rte_errno = EINVAL; return NULL; } if (data_room_size > extm->elt_size) { - RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n", + RTE_LOG_LINE(ERR, MBUF, "ext elt_size=%u is too small", priv_size); rte_errno = EINVAL; return NULL; @@ -321,7 +321,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, } /* Check whether enough external memory provided. 
*/ if (n_elts < n) { - RTE_LOG(ERR, MBUF, "not enough extmem\n"); + RTE_LOG_LINE(ERR, MBUF, "not enough extmem"); rte_errno = ENOMEM; return NULL; } @@ -342,7 +342,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c index 4fb1863a10..a9f7bb2b81 100644 --- a/lib/mbuf/rte_mbuf_dyn.c +++ b/lib/mbuf/rte_mbuf_dyn.c @@ -118,7 +118,7 @@ init_shared_mem(void) mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME); } if (mz == NULL) { - RTE_LOG(ERR, MBUF, "Failed to get mbuf dyn shared memory\n"); + RTE_LOG_LINE(ERR, MBUF, "Failed to get mbuf dyn shared memory"); return -1; } @@ -317,7 +317,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, shm->free_space[i] = 0; process_score(); - RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd", params->name, params->size, params->align, params->flags, offset); @@ -491,7 +491,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params, shm->free_flags &= ~(1ULL << bitnum); - RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u", params->name, params->flags, bitnum); return bitnum; @@ -592,8 +592,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, offset = rte_mbuf_dynfield_register(&field_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf field for timestamp\n"); + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf field for timestamp"); return -1; } if (field_offset != NULL) @@ -602,8 +602,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, strlcpy(flag_desc.name, flag_name, sizeof(flag_desc.name)); offset = rte_mbuf_dynflag_register(&flag_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf flag for %s timestamp\n", + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf flag for %s timestamp", direction); return -1; } diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c index 5318430126..639aa557f8 100644 --- a/lib/mbuf/rte_mbuf_pool_ops.c +++ b/lib/mbuf/rte_mbuf_pool_ops.c @@ -33,8 +33,8 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name) return 0; } - RTE_LOG(ERR, MBUF, - "%s is already registered as platform mbuf pool ops\n", + RTE_LOG_LINE(ERR, MBUF, + "%s is already registered as platform mbuf pool ops", (char *)mz->addr); return -EEXIST; } diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 2f8adad5ca..b66c8898a8 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -775,7 +775,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id) cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE, socket_id); if (cache == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate mempool cache."); rte_errno = ENOMEM; return NULL; } @@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, /* try to allocate tailq entry */ te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0); if (te == 
NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate tailq entry!"); goto exit_unlock; } @@ -1088,16 +1088,16 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, if (free == 0) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (put)\n"); } hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2; } else if (free == 1) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (get)\n"); } @@ -1105,8 +1105,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, } else if (free == 2) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 && cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (audit)\n"); } @@ -1114,8 +1114,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, tlr = rte_mempool_get_trailer(obj); cookie = tlr->cookie; if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad trailer cookie\n"); } @@ -1200,7 +1200,7 @@ mempool_audit_cache(const struct rte_mempool *mp) const struct rte_mempool_cache *cache; cache = &mp->local_cache[lcore_id]; if (cache->len > RTE_DIM(cache->objs)) { - RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "badness on cache[%u]", lcore_id); rte_panic("MEMPOOL: invalid cache len\n"); } @@ -1429,7 +1429,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func, cb = calloc(1, sizeof(*cb)); if (cb == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate event callback!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate event callback!"); ret = -ENOMEM; goto exit; } diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 4f8511b8f5..30ce579737 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -847,7 +847,7 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table, ret = ops->enqueue(mp, obj_table, n); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (unlikely(ret < 0)) - RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s", n, mp->name); #endif return ret; diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index e871de9ec9..d35e9b118b 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -31,22 +31,22 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (rte_mempool_ops_table.num_ops >= RTE_MEMPOOL_MAX_OPS_IDX) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(ERR, MEMPOOL, - "Maximum number of mempool ops structs exceeded\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Maximum number of mempool ops structs exceeded"); return -ENOSPC; } if (h->alloc == NULL || h->enqueue == NULL || h->dequeue == NULL || h->get_count == NULL) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - 
RTE_LOG(ERR, MEMPOOL, - "Missing callback while registering mempool ops\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Missing callback while registering mempool ops"); return -EINVAL; } if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long", __func__, h->name); rte_errno = EEXIST; return -EEXIST; diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c index 436cf54953..fe91c48947 100644 --- a/lib/pipeline/rte_pipeline.c +++ b/lib/pipeline/rte_pipeline.c @@ -160,22 +160,22 @@ static int rte_pipeline_check_params(struct rte_pipeline_params *params) { if (params == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* name */ if (params->name == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter name\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter name", __func__); return -EINVAL; } /* socket */ if (params->socket_id < 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter socket_id\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter socket_id", __func__); return -EINVAL; } @@ -192,8 +192,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) /* Check input parameters */ status = rte_pipeline_check_params(params); if (status != 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline params check failed (%d)\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline params check failed (%d)", __func__, status); return NULL; } @@ -203,8 +203,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline memory allocation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline memory allocation failed", __func__); return NULL; } @@ -232,8 +232,8 @@ rte_pipeline_free(struct rte_pipeline *p) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: rte_pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: rte_pipeline parameter is NULL", __func__); return -EINVAL; } @@ -273,44 +273,44 @@ rte_table_check_params(struct rte_pipeline *p, uint32_t *table_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter is NULL", __func__); return -EINVAL; } if (table_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: table_id parameter is NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops is NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer is NULL", __func__); return -EINVAL; } if (params->ops->f_lookup == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_lookup function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_lookup function pointer is NULL", 
__func__); return -EINVAL; } /* De we have room for one more table? */ if (p->num_tables == RTE_PIPELINE_TABLE_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for num_tables parameter\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for num_tables parameter", __func__); return -EINVAL; } @@ -343,8 +343,8 @@ rte_pipeline_table_create(struct rte_pipeline *p, default_entry = rte_zmalloc_socket( "PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id); if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Failed to allocate default entry\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Failed to allocate default entry", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_pipeline_table_create(struct rte_pipeline *p, entry_size); if (h_table == NULL) { rte_free(default_entry); - RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Table creation failed", __func__); return -EINVAL; } @@ -399,20 +399,20 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: default_entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: default_entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -421,8 +421,8 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (default_entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -448,14 +448,14 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -484,32 +484,32 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer 
NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add function pointer NULL", __func__); return -EINVAL; } @@ -517,8 +517,8 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, if ((entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -544,28 +544,28 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_delete == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -585,32 +585,32 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entries parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL", __func__); return -EINVAL; } @@ -619,8 +619,8 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, if ((entries[i]->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entries[i]->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } } @@ -649,28 +649,28 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = 
&p->tables[table_id]; if (table->ops.f_delete_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -687,35 +687,35 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_rx == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_rx function pointer NULL", __func__); return -EINVAL; } @@ -723,15 +723,15 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, /* burst_size */ if ((params->burst_size == 0) || (params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: invalid value for burst_size", __func__); return -EINVAL; } /* Do we have room for one more port? */ if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_in\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_in", __func__); return -EINVAL; } @@ -744,51 +744,51 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx_bulk function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx_bulk function pointer NULL", __func__); return -EINVAL; } /* Do we have room for one more port? 
*/ if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_out\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_out", __func__); return -EINVAL; } @@ -816,7 +816,7 @@ rte_pipeline_port_in_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -866,7 +866,7 @@ rte_pipeline_port_out_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -901,21 +901,21 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: Table ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Table ID %u is out of range", __func__, table_id); return -EINVAL; } @@ -935,14 +935,14 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -982,13 +982,13 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1035,7 +1035,7 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1043,17 +1043,17 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check that pipeline has at least one input port, one table and one output port */ if (p->num_ports_in == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 input port", __func__); return -EINVAL; } if (p->num_tables == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 table", __func__); return -EINVAL; } if (p->num_ports_out == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 output port", __func__); return -EINVAL; } @@ 
-1063,8 +1063,8 @@ rte_pipeline_check(struct rte_pipeline *p) struct rte_port_in *port_in = &p->ports_in[port_in_id]; if (port_in->table_id == RTE_TABLE_INVALID) { - RTE_LOG(ERR, PIPELINE, - "%s: Port IN ID %u is not connected\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Port IN ID %u is not connected", __func__, port_in_id); return -EINVAL; } @@ -1447,7 +1447,7 @@ rte_pipeline_flush(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1500,14 +1500,14 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1537,13 +1537,13 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_out) { - RTE_LOG(ERR, PIPELINE, - "%s: port OUT ID %u is out of range\n", __func__, port_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port OUT ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1571,14 +1571,14 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table %u is out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table %u is out of range", __func__, table_id); return -EINVAL; } diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c index e6bb7ee480..7f7eadda11 100644 --- a/lib/port/rte_port_ethdev.c +++ b/lib/port/rte_port_ethdev.c @@ -43,7 +43,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -51,7 +51,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -78,7 +78,7 @@ static int rte_port_ethdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -142,7 +142,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -150,7 +150,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", 
sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -257,7 +257,7 @@ static int rte_port_ethdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -323,7 +323,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -331,7 +331,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -470,7 +470,7 @@ static int rte_port_ethdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c index 13350fd608..1d0571966c 100644 --- a/lib/port/rte_port_eventdev.c +++ b/lib/port/rte_port_eventdev.c @@ -45,7 +45,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -53,7 +53,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -85,7 +85,7 @@ static int rte_port_eventdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -155,7 +155,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -163,7 +163,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -290,7 +290,7 @@ static int rte_port_eventdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -362,7 +362,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + 
RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -530,7 +530,7 @@ static int rte_port_eventdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c index 7e140793b2..1b95d7b014 100644 --- a/lib/port/rte_port_fd.c +++ b/lib/port/rte_port_fd.c @@ -43,19 +43,19 @@ rte_port_fd_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } if (conf->fd < 0) { - RTE_LOG(ERR, PORT, "%s: Invalid file descriptor\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid file descriptor", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Invalid MTU\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid MTU", __func__); return NULL; } if (conf->mempool == NULL) { - RTE_LOG(ERR, PORT, "%s: Invalid mempool\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid mempool", __func__); return NULL; } @@ -63,7 +63,7 @@ rte_port_fd_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -109,7 +109,7 @@ static int rte_port_fd_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -171,7 +171,7 @@ rte_port_fd_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -179,7 +179,7 @@ rte_port_fd_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -279,7 +279,7 @@ static int rte_port_fd_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -344,7 +344,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -352,7 +352,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate 
port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -464,7 +464,7 @@ static int rte_port_fd_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c index e1f1892176..39ff31e447 100644 --- a/lib/port/rte_port_frag.c +++ b/lib/port/rte_port_frag.c @@ -62,24 +62,24 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter mtu is invalid", __func__); return NULL; } if (conf->pool_direct == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_direct is NULL", __func__); return NULL; } if (conf->pool_indirect == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_indirect is NULL", __func__); return NULL; } @@ -88,7 +88,7 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return NULL; } @@ -232,7 +232,7 @@ static int rte_port_ring_reader_frag_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c index 15109661d1..1e697fd226 100644 --- a/lib/port/rte_port_ras.c +++ b/lib/port/rte_port_ras.c @@ -69,16 +69,16 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if ((conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter tx_burst_sz is invalid", __func__); return NULL; } @@ -87,7 +87,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate socket", __func__); return NULL; } @@ -103,7 +103,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) socket_id); if (port->frag_tbl == NULL) { - RTE_LOG(ERR, PORT, "%s: rte_ip_frag_table_create failed\n", + RTE_LOG_LINE(ERR, PORT, "%s: rte_ip_frag_table_create failed", __func__); rte_free(port); 
return NULL; @@ -282,7 +282,7 @@ rte_port_ring_writer_ras_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c index 002efb7c3e..42b33763d1 100644 --- a/lib/port/rte_port_ring.c +++ b/lib/port/rte_port_ring.c @@ -46,7 +46,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, (conf->ring == NULL) || (rte_ring_is_cons_single(conf->ring) && is_multi) || (!rte_ring_is_cons_single(conf->ring) && !is_multi)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -54,7 +54,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -107,7 +107,7 @@ static int rte_port_ring_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -174,7 +174,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -182,7 +182,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_ring_writer_free(void *port) struct rte_port_ring_writer *p = port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -443,7 +443,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -451,7 +451,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -703,7 +703,7 @@ rte_port_ring_writer_nodrop_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c index f6255c4346..e83112989f 100644 --- a/lib/port/rte_port_sched.c +++ b/lib/port/rte_port_sched.c @@ -40,7 +40,7 @@ rte_port_sched_reader_create(void *params, int socket_id) /* Check input parameters */ if ((conf == NULL) || (conf->sched == NULL)) { 
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -48,7 +48,7 @@ rte_port_sched_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -74,7 +74,7 @@ static int rte_port_sched_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -139,7 +139,7 @@ rte_port_sched_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -147,7 +147,7 @@ rte_port_sched_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -247,7 +247,7 @@ static int rte_port_sched_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c index ff9677cdfe..cb4b7fa7fb 100644 --- a/lib/port/rte_port_source_sink.c +++ b/lib/port/rte_port_source_sink.c @@ -75,8 +75,8 @@ pcap_source_load(struct rte_port_source *port, /* first time open, get packet number */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -88,29 +88,29 @@ pcap_source_load(struct rte_port_source *port, port->pkt_len = rte_zmalloc_socket("PCAP", (sizeof(*port->pkt_len) * n_pkts), 0, socket_id); if (port->pkt_len == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } pkt_len_aligns = rte_malloc("PCAP", (sizeof(*pkt_len_aligns) * n_pkts), 0); if (pkt_len_aligns == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } port->pkts = rte_zmalloc_socket("PCAP", (sizeof(*port->pkts) * n_pkts), 0, socket_id); if (port->pkts == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } /* open 2nd time, get pkt_len */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -128,7 +128,7 @@ pcap_source_load(struct rte_port_source *port, buff = rte_zmalloc_socket("PCAP", total_buff_len, 0, socket_id); if (buff == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } @@ -137,8 +137,8 @@ pcap_source_load(struct rte_port_source *port, /* open file one last time to copy the pkt content 
*/ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -155,8 +155,8 @@ pcap_source_load(struct rte_port_source *port, rte_free(pkt_len_aligns); - RTE_LOG(INFO, PORT, "Successfully load pcap file " - "'%s' with %u pkts\n", + RTE_LOG_LINE(INFO, PORT, "Successfully load pcap file " + "'%s' with %u pkts", file_name, port->n_pkts); return 0; @@ -180,8 +180,8 @@ pcap_source_load(struct rte_port_source *port, int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Source port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Source port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -199,7 +199,7 @@ rte_port_source_create(void *params, int socket_id) /* Check input arguments*/ if ((p == NULL) || (p->mempool == NULL)) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -207,7 +207,7 @@ rte_port_source_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -332,15 +332,15 @@ pcap_sink_open(struct rte_port_sink *port, /** Open a dead pcap handler for opening dumper file */ tx_pcap = pcap_open_dead(DLT_EN10MB, 65535); if (tx_pcap == NULL) { - RTE_LOG(ERR, PORT, "Cannot open pcap dead handler\n"); + RTE_LOG_LINE(ERR, PORT, "Cannot open pcap dead handler"); return -1; } /* The dumper is created using the previous pcap_t reference */ pcap_dumper = pcap_dump_open(tx_pcap, file_name); if (pcap_dumper == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "\"%s\" for writing\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "\"%s\" for writing", file_name); return -1; } @@ -349,7 +349,7 @@ pcap_sink_open(struct rte_port_sink *port, port->pkt_index = 0; port->dump_finish = 0; - RTE_LOG(INFO, PORT, "Ready to dump packets to file \"%s\"\n", + RTE_LOG_LINE(INFO, PORT, "Ready to dump packets to file \"%s\"", file_name); return 0; @@ -402,7 +402,7 @@ pcap_sink_write_pkt(struct rte_port_sink *port, struct rte_mbuf *mbuf) if ((port->max_pkts != 0) && (port->pkt_index >= port->max_pkts)) { port->dump_finish = 1; - RTE_LOG(INFO, PORT, "Dumped %u packets to file\n", + RTE_LOG_LINE(INFO, PORT, "Dumped %u packets to file", port->pkt_index); } @@ -433,8 +433,8 @@ do { \ int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Sink port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Sink port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -459,7 +459,7 @@ rte_port_sink_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c index 27b7e07cea..8e9abff9d6 100644 --- a/lib/port/rte_port_sym_crypto.c +++ b/lib/port/rte_port_sym_crypto.c @@ -44,7 +44,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - 
RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -52,7 +52,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -100,7 +100,7 @@ static int rte_port_sym_crypto_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -167,7 +167,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -175,7 +175,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -285,7 +285,7 @@ static int rte_port_sym_crypto_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -361,7 +361,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -497,7 +497,7 @@ static int rte_port_sym_crypto_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index a6f2097d5b..a9bbda8f48 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -59,38 +59,38 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) int fd = -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } /* check if path is already open */ if (global_fds[lcore_id] != -1) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d", lcore_id, global_fds[lcore_id]); return -1; } snprintf(fd_path, PATH_MAX, "%s.%u", path, lcore_id); - RTE_LOG(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u\n", + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u", fd_path, lcore_id); fd = open(fd_path, O_RDWR); if (fd < 0) { - 
RTE_LOG(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " - "%s\n", fd_path, strerror(errno)); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " + "%s", fd_path, strerror(errno)); return -1; } flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s", fd_path); goto error; } flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s\n", fd_path); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " + "file %s", fd_path); goto error; } /* QEMU needs a delay after connection */ @@ -103,13 +103,13 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) global_fds[lcore_id] = fd; ret = guest_channel_send_msg(&pkt, lcore_id); if (ret != 0) { - RTE_LOG(ERR, GUEST_CHANNEL, - "Error on channel '%s' communications test: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, + "Error on channel '%s' communications test: %s", fd_path, ret > 0 ? strerror(ret) : "channel not connected"); goto error; } - RTE_LOG(INFO, GUEST_CHANNEL, "Channel '%s' is now connected\n", fd_path); + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Channel '%s' is now connected", fd_path); return 0; error: close(fd); @@ -125,13 +125,13 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt, void *buffer = pkt; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } while (buffer_len > 0) { @@ -166,13 +166,13 @@ int power_guest_channel_read_msg(void *pkt, return -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } @@ -181,10 +181,10 @@ int power_guest_channel_read_msg(void *pkt, ret = poll(&fds, 1, TIMEOUT); if (ret == 0) { - RTE_LOG(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.\n"); + RTE_LOG_LINE(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function."); return -1; } else if (ret < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s", strerror(errno)); return -1; } @@ -200,7 +200,7 @@ int power_guest_channel_read_msg(void *pkt, } if (ret == 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed."); return -1; } pkt = (char *)pkt + ret; @@ -221,7 +221,7 @@ void guest_channel_host_disconnect(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index 8b55f19247..dd143f2cc8 100644 --- 
a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -63,8 +63,8 @@ static int set_freq_internal(struct acpi_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -75,13 +75,13 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -127,14 +127,14 @@ power_get_available_freqs(struct acpi_power_info *pi) open_core_sysfs_file(&f, "r", POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } @@ -143,12 +143,12 @@ power_get_available_freqs(struct acpi_power_info *pi) count = rte_strsplit(buf, sizeof(buf), freqs, RTE_MAX_LCORE_FREQS, ' '); if (count <= 0) { - RTE_LOG(ERR, POWER, "No available frequency in " - ""POWER_SYSFILE_AVAIL_FREQ"\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "No available frequency in " + POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); goto out; } if (count >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies : %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies : %d", count); goto out; } @@ -196,14 +196,14 @@ power_init_for_setting_freq(struct acpi_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "Failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -237,7 +237,7 @@ power_acpi_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -253,42 +253,42 @@ power_acpi_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of 
lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -310,7 +310,7 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -325,8 +325,8 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -336,14 +336,14 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -364,18 +364,18 @@ power_acpi_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -387,7 +387,7 @@ uint32_t power_acpi_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -398,7 +398,7 @@ int power_acpi_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -411,7 +411,7 @@ power_acpi_cpufreq_freq_down(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -429,7 +429,7 @@ power_acpi_cpufreq_freq_up(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -446,7 +446,7 @@ int power_acpi_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -470,7 +470,7 @@ power_acpi_cpufreq_freq_min(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -487,7 +487,7 @@ power_acpi_turbo_status(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -503,7 +503,7 @@ power_acpi_enable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -513,16 +513,16 @@ power_acpi_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } /* Max may have changed, so call to max function */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -536,7 +536,7 @@ power_acpi_disable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -547,8 +547,8 @@ power_acpi_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -563,11 +563,11 @@ int power_acpi_get_capabilities(unsigned int lcore_id, struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index dbd9d2b3ee..44581fd48b 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -70,8 +70,8 @@ static int 
set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -82,13 +82,13 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -119,7 +119,7 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -127,21 +127,21 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -190,7 +190,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -198,7 +198,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -206,28 +206,28 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_FREQ, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_nom, &nominal_freq); if (ret < 0) { - RTE_LOG(ERR, 
POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } @@ -235,8 +235,8 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) power_check_turbo(pi); if (scaling_max_freq < scaling_min_freq) { - RTE_LOG(ERR, POWER, "scaling min freq exceeds max freq, " - "not expected! Check system power policy\n"); + RTE_LOG_LINE(ERR, POWER, "scaling min freq exceeds max freq, " + "not expected! Check system power policy"); goto out; } else if (scaling_max_freq == scaling_min_freq) { num_freqs = 1; @@ -304,14 +304,14 @@ power_init_for_setting_freq(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -355,7 +355,7 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -371,42 +371,42 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -434,7 +434,7 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -449,8 +449,8 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) if 
(!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -460,14 +460,14 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -484,18 +484,18 @@ power_amd_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -507,7 +507,7 @@ uint32_t power_amd_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -518,7 +518,7 @@ int power_amd_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -531,7 +531,7 @@ power_amd_pstate_cpufreq_freq_down(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -549,7 +549,7 @@ power_amd_pstate_cpufreq_freq_up(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -566,7 +566,7 @@ int power_amd_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -591,7 +591,7 @@ power_amd_pstate_cpufreq_freq_min(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_amd_pstate_turbo_status(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -622,7 +622,7 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -632,8 +632,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -643,8 +643,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -658,7 +658,7 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -669,8 +669,8 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= pi->nom_idx)) { /* Try to set freq to max by default coming out of turbo */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -686,11 +686,11 @@ power_amd_pstate_get_capabilities(unsigned int lcore_id, struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bf77eafa88..bc57642cd1 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -163,14 +163,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, open_core_sysfs_file(&f_governor, "rw+", POWER_SYSFILE_GOVERNOR, lcore_id); if (f_governor == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = read_core_sysfs_s(f_governor, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_GOVERNOR); goto out; } @@ -190,14 +190,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Write the new governor */ ret = write_core_sysfs_s(f_governor, new_governor); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to write %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to write %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = 0; - RTE_LOG(INFO, POWER, "Power management governor of lcore %u has been " - "set to '%s' successfully\n", lcore_id, new_governor); + RTE_LOG_LINE(INFO, POWER, "Power management governor of lcore %u has been " + "set to '%s' successfully", lcore_id, new_governor); out: if (f_governor != NULL) fclose(f_governor); diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index bb70f6ae52..83e1e62830 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -73,8 +73,8 @@ static int set_freq_internal(struct cppc_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, 
POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -85,13 +85,13 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -122,7 +122,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -130,7 +130,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -138,28 +138,28 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_cmax, "r", POWER_SYSFILE_SYS_MAX, pi->lcore_id); if (f_cmax == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SYS_MAX); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_cmax, &cpuinfo_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SYS_MAX); goto err; } @@ -209,7 +209,7 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -217,21 +217,21 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -249,7 +249,7 @@ power_get_available_freqs(struct cppc_power_info *pi) num_freqs = (nominal_perf - scaling_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= 
RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -290,14 +290,14 @@ power_init_for_setting_freq(struct cppc_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -341,7 +341,7 @@ power_cppc_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -357,42 +357,42 @@ power_cppc_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -420,7 +420,7 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -435,8 +435,8 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -446,14 +446,14 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) 
< 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -470,18 +470,18 @@ power_cppc_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -493,7 +493,7 @@ uint32_t power_cppc_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -504,7 +504,7 @@ int power_cppc_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -517,7 +517,7 @@ power_cppc_cpufreq_freq_down(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -535,7 +535,7 @@ power_cppc_cpufreq_freq_up(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -552,7 +552,7 @@ int power_cppc_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -576,7 +576,7 @@ power_cppc_cpufreq_freq_min(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -592,7 +592,7 @@ power_cppc_turbo_status(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -617,8 +617,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -628,8 +628,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set 
frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -643,7 +643,7 @@ power_cppc_disable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -654,8 +654,8 @@ power_cppc_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -671,11 +671,11 @@ power_cppc_get_capabilities(unsigned int lcore_id, struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 688aebc4ee..0ee8e603d2 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -52,8 +52,8 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) int ret; if (idx >= MAX_UNCORE_FREQS || idx >= ui->nb_freqs) { - RTE_LOG(DEBUG, POWER, "Invalid uncore frequency index %u, which " - "should be less than %u\n", idx, ui->nb_freqs); + RTE_LOG_LINE(DEBUG, POWER, "Invalid uncore frequency index %u, which " + "should be less than %u", idx, ui->nb_freqs); return -1; } @@ -65,13 +65,13 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) open_core_sysfs_file(&ui->f_cur_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (ui->f_cur_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); return -1; } ret = read_core_sysfs_u32(ui->f_cur_max, &curr_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); fclose(ui->f_cur_max); return -1; @@ -79,14 +79,14 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) /* check this value first before fprintf value to f_cur_max, so value isn't overwritten */ if (fprintf(ui->f_cur_min, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -121,13 +121,13 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_max, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ, ui->pkg, ui->die); if (f_base_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } ret = read_core_sysfs_u32(f_base_max, &base_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, 
"Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -136,14 +136,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_min, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ, ui->pkg, ui->die); if (f_base_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } if (f_base_min != NULL) { ret = read_core_sysfs_u32(f_base_min, &base_min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -153,14 +153,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_min, "rw+", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ, ui->pkg, ui->die); if (f_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } if (f_min != NULL) { ret = read_core_sysfs_u32(f_min, &min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } @@ -170,14 +170,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (f_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } if (f_max != NULL) { ret = read_core_sysfs_u32(f_max, &max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } @@ -222,7 +222,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) num_uncore_freqs = (ui->init_max_freq - ui->init_min_freq) / BUS_FREQ + 1; if (num_uncore_freqs >= MAX_UNCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available uncore frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available uncore frequencies: %d", num_uncore_freqs); goto out; } @@ -250,7 +250,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_pkgs == 0) return -1; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Package number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Package number %02u can not exceed %u", pkg, max_pkgs); return -1; } @@ -259,7 +259,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_dies == 0) return -1; if (die >= max_dies) { - RTE_LOG(DEBUG, POWER, "Die number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Die number %02u can not exceed %u", die, max_dies); return -1; } @@ -282,15 +282,15 @@ power_intel_uncore_init(unsigned int pkg, unsigned int die) /* Init for setting uncore die frequency */ if (power_init_for_setting_uncore_freq(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot init for setting uncore frequency for " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot init for setting uncore frequency for " + "pkg %02u die %02u", pkg, die); return -1; } /* Get the available frequencies */ if (power_get_available_uncore_freqs(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot get available uncore frequencies of " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot get available uncore frequencies of " + "pkg %02u die %02u", pkg, 
die); return -1; } @@ -309,14 +309,14 @@ power_intel_uncore_exit(unsigned int pkg, unsigned int die) ui = &uncore_info[pkg][die]; if (fprintf(ui->f_cur_min, "%u", ui->org_min_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", ui->org_max_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -385,13 +385,13 @@ power_intel_uncore_freqs(unsigned int pkg, unsigned int die, uint32_t *freqs, ui return -1; if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } ui = &uncore_info[pkg][die]; if (num < ui->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, ui->freqs, ui->nb_freqs * sizeof(uint32_t)); @@ -419,10 +419,10 @@ power_intel_uncore_get_num_pkgs(void) d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. " "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } @@ -451,16 +451,16 @@ power_intel_uncore_get_num_dies(unsigned int pkg) if (max_pkgs == 0) return 0; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Invalid package number\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid package number"); return 0; } d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. 
" "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } diff --git a/lib/power/power_kvm_vm.c b/lib/power/power_kvm_vm.c index db031f4310..218799491e 100644 --- a/lib/power/power_kvm_vm.c +++ b/lib/power/power_kvm_vm.c @@ -25,7 +25,7 @@ int power_kvm_vm_init(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -46,16 +46,16 @@ power_kvm_vm_freqs(__rte_unused unsigned int lcore_id, __rte_unused uint32_t *freqs, __rte_unused uint32_t num) { - RTE_LOG(ERR, POWER, "rte_power_freqs is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_freqs is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } uint32_t power_kvm_vm_get_freq(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_get_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -63,8 +63,8 @@ int power_kvm_vm_set_freq(__rte_unused unsigned int lcore_id, __rte_unused uint32_t index) { - RTE_LOG(ERR, POWER, "rte_power_set_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_set_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -74,7 +74,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) int ret; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -82,7 +82,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) ret = guest_channel_send_msg(&pkt[lcore_id], lcore_id); if (ret == 0) return 1; - RTE_LOG(DEBUG, POWER, "Error sending message: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Error sending message: %s", ret > 0 ? 
strerror(ret) : "channel not connected"); return -1; } @@ -114,7 +114,7 @@ power_kvm_vm_freq_min(unsigned int lcore_id) int power_kvm_vm_turbo_status(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } @@ -134,6 +134,6 @@ struct rte_power_core_capabilities; int power_kvm_vm_get_capabilities(__rte_unused unsigned int lcore_id, __rte_unused struct rte_power_core_capabilities *caps) { - RTE_LOG(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 5ca5f60bcd..56aa302b5d 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -82,7 +82,7 @@ power_read_turbo_pct(uint64_t *outVal) fd = open(POWER_SYSFILE_TURBO_PCT, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, POWER, "Error opening '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error opening '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); return fd; } @@ -90,7 +90,7 @@ power_read_turbo_pct(uint64_t *outVal) ret = read(fd, val, sizeof(val)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Error reading '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error reading '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); goto out; } @@ -98,7 +98,7 @@ power_read_turbo_pct(uint64_t *outVal) errno = 0; *outVal = (uint64_t) strtol(val, &endptr, 10); if (errno != 0 || (*endptr != 0 && *endptr != '\n')) { - RTE_LOG(ERR, POWER, "Error converting str to digits, read from %s: %s\n", + RTE_LOG_LINE(ERR, POWER, "Error converting str to digits, read from %s: %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); ret = -1; goto out; @@ -126,7 +126,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_base_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -134,7 +134,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_base_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -142,7 +142,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "rw+", POWER_SYSFILE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MIN_FREQ); goto err; } @@ -150,7 +150,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "rw+", POWER_SYSFILE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MAX_FREQ); goto err; } @@ -162,7 +162,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base max ratio */ ret = read_core_sysfs_u32(f_base_max, &base_max_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", 
POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -170,7 +170,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base min ratio */ ret = read_core_sysfs_u32(f_base_min, &base_min_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -179,7 +179,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) if (f_base != NULL) { ret = read_core_sysfs_u32(f_base, &base_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_FREQ); goto err; } @@ -257,8 +257,8 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) uint32_t target_freq = 0; if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -270,15 +270,15 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) * User need change the min/max as same value. */ if (fseek(pi->f_cur_min, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fseek(pi->f_cur_max, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } @@ -288,7 +288,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (pi->turbo_enable) target_freq = pi->sys_max_freq; else { - RTE_LOG(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u\n", + RTE_LOG_LINE(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u", pi->lcore_id); return -1; } @@ -299,14 +299,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx > pi->curr_idx) { if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -322,14 +322,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx < pi->curr_idx) { if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -384,7 +384,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open 
%s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } @@ -392,7 +392,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -400,14 +400,14 @@ power_get_available_freqs(struct pstate_power_info *pi) /* read base ratios */ ret = read_core_sysfs_u32(f_max, &sys_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &sys_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -450,7 +450,7 @@ power_get_available_freqs(struct pstate_power_info *pi) num_freqs = (RTE_MIN(base_max_freq, sys_max_freq) - sys_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -494,14 +494,14 @@ power_get_cur_idx(struct pstate_power_info *pi) open_core_sysfs_file(&f_cur, "r", POWER_SYSFILE_CUR_FREQ, pi->lcore_id); if (f_cur == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_CUR_FREQ); goto fail; } ret = read_core_sysfs_u32(f_cur, &sys_cur_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_CUR_FREQ); goto fail; } @@ -543,7 +543,7 @@ power_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceed %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceed %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -559,47 +559,47 @@ power_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_performance(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "performance\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "performance", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } if (power_get_cur_idx(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get current frequency " - "index of lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get current frequency " + "index of lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_pstate_cpufreq_freq_max(lcore_id) < 
0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -621,7 +621,7 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -637,8 +637,8 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -650,14 +650,14 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'performance' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -679,18 +679,18 @@ power_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -702,7 +702,7 @@ uint32_t power_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -714,7 +714,7 @@ int power_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -727,7 +727,7 @@ power_pstate_cpufreq_freq_up(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -746,7 +746,7 @@ power_pstate_cpufreq_freq_down(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore 
ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -762,7 +762,7 @@ int power_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -787,7 +787,7 @@ power_pstate_cpufreq_freq_min(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -804,7 +804,7 @@ power_pstate_turbo_status(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -819,7 +819,7 @@ power_pstate_enable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -829,8 +829,8 @@ power_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -845,7 +845,7 @@ power_pstate_disable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -856,8 +856,8 @@ power_pstate_disable_turbo(unsigned int lcore_id) if (pi->turbo_available && pi->curr_idx <= 1) { /* Try to set freq to max by default coming out of turbo */ if (power_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -873,11 +873,11 @@ int power_pstate_get_capabilities(unsigned int lcore_id, struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/rte_power.c b/lib/power/rte_power.c index 1502612b0a..7bee4f88f9 100644 --- a/lib/power/rte_power.c +++ b/lib/power/rte_power.c @@ -74,7 +74,7 @@ rte_power_set_env(enum power_management_env env) rte_spinlock_lock(&global_env_cfg_lock); if (global_default_env != PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Power Management Environment already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Power Management Environment already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -143,7 +143,7 @@ rte_power_set_env(enum power_management_env env) rte_power_freq_disable_turbo = power_amd_pstate_disable_turbo; rte_power_get_capabilities = power_amd_pstate_get_capabilities; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; } @@ -190,46 +190,46 @@ rte_power_init(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_init(lcore_id); default: - RTE_LOG(INFO, POWER, "Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Env isn't set yet!"); } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...\n"); + RTE_LOG_LINE(INFO, 
POWER, "Attempting to initialise ACPI cpufreq power management..."); ret = power_acpi_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_ACPI_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise PSTAT power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise PSTAT power management..."); ret = power_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise AMD PSTATE power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise AMD PSTATE power management..."); ret = power_amd_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_AMD_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise CPPC power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise CPPC power management..."); ret = power_cppc_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_CPPC_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise VM power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise VM power management..."); ret = power_kvm_vm_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_KVM_VM); goto out; } - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment for lcore " - "%u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment for lcore " + "%u", lcore_id); out: return ret; } @@ -249,7 +249,7 @@ rte_power_exit(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_exit(lcore_id); default: - RTE_LOG(ERR, POWER, "Environment has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Environment has not been set, unable to exit gracefully"); } return -1; diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 6f18ed0adf..fb7d8fddb3 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -146,7 +146,7 @@ get_monitor_addresses(struct pmd_core_cfg *cfg, /* attempted out of bounds access */ if (i >= len) { - RTE_LOG(ERR, POWER, "Too many queues being monitored\n"); + RTE_LOG_LINE(ERR, POWER, "Too many queues being monitored"); return -1; } @@ -423,7 +423,7 @@ check_scale(unsigned int lcore) if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported"); return -ENOTSUP; } /* ensure we could initialize the power library */ @@ -434,7 +434,7 @@ check_scale(unsigned int lcore) env = rte_power_get_env(); if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ && env != PM_ENV_AMD_PSTATE_CPUFREQ) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized"); return -ENOTSUP; } @@ -450,7 +450,7 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* check if rte_power_monitor is supported */ if (!global_data.intrinsics_support.power_monitor) { - RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring intrinsics are not supported"); return -ENOTSUP; } /* check if multi-monitor is supported */ @@ -459,14 +459,14 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* 
if we're adding a new queue, do we support multiple queues? */ if (cfg->n_queues > 0 && !multimonitor_supported) { - RTE_LOG(DEBUG, POWER, "Monitoring multiple queues is not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring multiple queues is not supported"); return -ENOTSUP; } /* check if the device supports the necessary PMD API */ if (rte_eth_get_monitor_addr(qdata->portid, qdata->qid, &dummy) == -ENOTSUP) { - RTE_LOG(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr\n"); + RTE_LOG_LINE(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr"); return -ENOTSUP; } @@ -566,14 +566,14 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id, clb = clb_pause; break; default: - RTE_LOG(DEBUG, POWER, "Invalid power management type\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid power management type"); ret = -EINVAL; goto end; } /* add this queue to the list */ ret = queue_list_add(lcore_cfg, &qdata); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to add queue to list: %s", strerror(-ret)); goto end; } @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); + RTE_LOG_LINE(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); return -EINVAL; } pause_duration = duration; @@ -704,12 +704,12 @@ int rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -721,7 +721,7 @@ int rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); return -EINVAL; } @@ -742,12 +742,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == 0) - RTE_LOG(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq."); return scale_freq_min[lcore]; } @@ -756,12 +756,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == UINT32_MAX) { - RTE_LOG(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq max config not set. 
Using sysfs max freq."); return 0; } diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c index 9c20fe150d..d57fc18faa 100644 --- a/lib/power/rte_power_uncore.c +++ b/lib/power/rte_power_uncore.c @@ -101,7 +101,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_spinlock_lock(&global_env_cfg_lock); if (default_uncore_env != RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Uncore Power Management Env already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Power Management Env already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -124,7 +124,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_power_uncore_get_num_pkgs = power_intel_uncore_get_num_pkgs; rte_power_uncore_get_num_dies = power_intel_uncore_get_num_dies; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", env); + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; goto out; } @@ -159,12 +159,12 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_init(pkg, die); default: - RTE_LOG(INFO, POWER, "Uncore Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Uncore Env isn't set yet!"); break; } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt..."); ret = power_intel_uncore_init(pkg, die); if (ret == 0) { rte_power_set_uncore_env(RTE_UNCORE_PM_ENV_INTEL_UNCORE); @@ -172,8 +172,8 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) } if (default_uncore_env == RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment " - "for package %u Die %u\n", pkg, die); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment " + "for package %u Die %u", pkg, die); ret = 0; } out: @@ -187,7 +187,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_exit(pkg, die); default: - RTE_LOG(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully"); break; } return -1; diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 5b6530788a..bd0b83be0c 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -20,7 +20,7 @@ #include "rcu_qsbr_pvt.h" #define RCU_LOG(level, fmt, args...) 
\ - RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) /* Get the memory size of QSBR variable */ size_t diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c index 640719c3ec..847e45b9f7 100644 --- a/lib/reorder/rte_reorder.c +++ b/lib/reorder/rte_reorder.c @@ -74,34 +74,34 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize, }; if (b == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer parameter:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer parameter:" + " NULL"); rte_errno = EINVAL; return NULL; } if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } if (bufsize < min_bufsize) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer memory size: %u, " - "minimum required: %u\n", bufsize, min_bufsize); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer memory size: %u, " + "minimum required: %u", bufsize, min_bufsize); rte_errno = EINVAL; return NULL; } rte_reorder_seqn_dynfield_offset = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); if (rte_reorder_seqn_dynfield_offset < 0) { - RTE_LOG(ERR, REORDER, - "Failed to register mbuf field for reorder sequence number, rte_errno: %i\n", + RTE_LOG_LINE(ERR, REORDER, + "Failed to register mbuf field for reorder sequence number, rte_errno: %i", rte_errno); rte_errno = ENOMEM; return NULL; @@ -161,14 +161,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Check user arguments. */ if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } @@ -176,7 +176,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* allocate tailq entry */ te = rte_zmalloc("REORDER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, REORDER, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, REORDER, "Failed to allocate tailq entry"); rte_errno = ENOMEM; return NULL; } @@ -184,7 +184,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Allocate memory to store the reorder buffer structure. 
*/ b = rte_zmalloc_socket("REORDER_BUFFER", bufsize, 0, socket_id); if (b == NULL) { - RTE_LOG(ERR, REORDER, "Memzone allocation failed\n"); + RTE_LOG_LINE(ERR, REORDER, "Memzone allocation failed"); rte_errno = ENOMEM; rte_free(te); return NULL; diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c index 251d0d4ef1..baee4bff5a 100644 --- a/lib/rib/rte_rib.c +++ b/lib/rib/rte_rib.c @@ -416,8 +416,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB %s", name); return NULL; } @@ -441,8 +441,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("RIB_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -451,7 +451,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c index ad3d48ab8e..ce54f51208 100644 --- a/lib/rib/rte_rib6.c +++ b/lib/rib/rte_rib6.c @@ -485,8 +485,8 @@ rte_rib6_create(const char *name, int socket_id, NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB6 %s", name); return NULL; } @@ -510,8 +510,8 @@ rte_rib6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("RIB6_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB6 %s", name); rte_errno = ENOMEM; goto exit; } @@ -520,7 +520,7 @@ rte_rib6_create(const char *name, int socket_id, rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib6), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB6 %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB6 %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c index 12046419f1..7fd6576c8c 100644 --- a/lib/ring/rte_ring.c +++ b/lib/ring/rte_ring.c @@ -55,15 +55,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) /* Check if element size is a multiple of 4B */ if (esize % 4 != 0) { - RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + RTE_LOG_LINE(ERR, RING, "element size is not a multiple of 4"); return -EINVAL; } /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { - RTE_LOG(ERR, RING, - "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested number of elements is invalid, must be power of 2, and not exceed %u", RTE_RING_SZ_MASK); return -EINVAL; @@ -198,8 +198,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, /* future proof flags, only 
allow supported values */ if (flags & ~RING_F_MASK) { - RTE_LOG(ERR, RING, - "Unsupported flags requested %#x\n", flags); + RTE_LOG_LINE(ERR, RING, + "Unsupported flags requested %#x", flags); return -EINVAL; } @@ -219,8 +219,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, r->capacity = count; } else { if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) { - RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested size is invalid, must be power of 2, and not exceed the size limit %u", RTE_RING_SZ_MASK); return -EINVAL; } @@ -274,7 +274,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -299,7 +299,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, TAILQ_INSERT_TAIL(ring_list, te, next); } else { r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory"); rte_free(te); } rte_mcfg_tailq_write_unlock(); @@ -331,8 +331,8 @@ rte_ring_free(struct rte_ring *r) * therefore, there is no memzone to free. */ if (r->memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring, not created with rte_ring_create()\n"); + RTE_LOG_LINE(ERR, RING, + "Cannot free ring, not created with rte_ring_create()"); return; } @@ -355,7 +355,7 @@ rte_ring_free(struct rte_ring *r) rte_mcfg_tailq_write_unlock(); if (rte_memzone_free(r->memzone) != 0) - RTE_LOG(ERR, RING, "Cannot free memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot free memory"); rte_free(te); } diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c index cce0ce762d..ac1f99e2bd 100644 --- a/lib/sched/rte_pie.c +++ b/lib/sched/rte_pie.c @@ -17,7 +17,7 @@ int rte_pie_rt_data_init(struct rte_pie *pie) { if (pie == NULL) { - RTE_LOG(ERR, SCHED, "%s: Invalid addr for pie\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Invalid addr for pie", __func__); return -EINVAL; } @@ -39,26 +39,26 @@ rte_pie_config_init(struct rte_pie_config *pie_cfg, return -1; if (qdelay_ref <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qdelay_ref\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qdelay_ref", __func__); return -EINVAL; } if (dp_update_interval <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for dp_update_interval\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for dp_update_interval", __func__); return -EINVAL; } if (max_burst <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for max_burst\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for max_burst", __func__); return -EINVAL; } if (tailq_th <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tailq_th\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tailq_th", __func__); return -EINVAL; } diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index 76dd8dd738..75f2f12007 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -325,23 +325,23 @@ pipe_profile_check(struct rte_sched_pipe_params *params, /* Pipe parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* TB rate: non-zero, not greater 
than port rate */ if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb rate", __func__); return -EINVAL; } /* TB size: non-zero */ if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb size", __func__); return -EINVAL; } @@ -350,38 +350,38 @@ pipe_profile_check(struct rte_sched_pipe_params *params, if ((qsize[i] == 0 && params->tc_rate[i] != 0) || (qsize[i] != 0 && (params->tc_rate[i] == 0 || params->tc_rate[i] > params->tb_rate))) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize or tc_rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize or tc_rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 || qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for be traffic class rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for be traffic class rate", __func__); return -EINVAL; } /* TC period: non-zero */ if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc period", __func__); return -EINVAL; } /* Best effort tc oversubscription weight: non-zero */ if (params->tc_ov_weight == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc ov weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc ov weight", __func__); return -EINVAL; } /* Queue WRR weights: non-zero */ for (i = 0; i < RTE_SCHED_BE_QUEUES_PER_PIPE; i++) { if (params->wrr_weights[i] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for wrr weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for wrr weight", __func__); return -EINVAL; } } @@ -397,20 +397,20 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter params", __func__); return -EINVAL; } if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb rate", __func__); return -EINVAL; } if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb size", __func__); return -EINVAL; } @@ -418,21 +418,21 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, uint64_t tc_rate = params->tc_rate[i]; if (tc_rate == 0 || (tc_rate > params->tb_rate)) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect tc rate(best effort)\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect tc rate(best effort)", __func__); return -EINVAL; } if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc period", __func__); return -EINVAL; } @@ -445,29 +445,29 @@ rte_sched_port_check_params(struct 
rte_sched_port_params *params) uint32_t i; if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* socket */ if (params->socket < 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for socket id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for socket id", __func__); return -EINVAL; } /* rate */ if (params->rate == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for rate", __func__); return -EINVAL; } /* mtu */ if (params->mtu == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for mtu\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for mtu", __func__); return -EINVAL; } @@ -475,8 +475,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) if (params->n_subports_per_port == 0 || params->n_subports_per_port > 1u << 16 || !rte_is_power_of_2(params->n_subports_per_port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for number of subports\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for number of subports", __func__); return -EINVAL; } @@ -484,8 +484,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) params->n_subport_profiles == 0 || params->n_max_subport_profiles == 0 || params->n_subport_profiles > params->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport profiles", __func__); return -EINVAL; } @@ -496,8 +496,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) status = subport_profile_check(p, params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -506,8 +506,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) /* n_pipes_per_subport: non-zero, power of 2 */ if (params->n_pipes_per_subport == 0 || !rte_is_power_of_2(params->n_pipes_per_subport)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for maximum pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for maximum pipes number", __func__); return -EINVAL; } @@ -830,8 +830,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } @@ -842,14 +842,14 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, uint16_t qsize = params->qsize[i]; if (qsize != 0 && !rte_is_power_of_2(qsize)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize", __func__); return -EINVAL; } } if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Incorrect qsize", __func__); return -EINVAL; } @@ -857,8 +857,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, if (params->n_pipes_per_subport_enabled == 0 || params->n_pipes_per_subport_enabled > n_max_pipes_per_subport || !rte_is_power_of_2(params->n_pipes_per_subport_enabled)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect 
value for pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipes number", __func__); return -EINVAL; } @@ -867,8 +867,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, params->n_pipe_profiles == 0 || params->n_max_pipe_profiles == 0 || params->n_pipe_profiles > params->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for pipe profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipe profiles", __func__); return -EINVAL; } @@ -878,8 +878,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, status = pipe_profile_check(p, rate, &params->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } } @@ -896,8 +896,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, status = rte_sched_port_check_params(port_params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler port params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler port params check failed (%d)", __func__, status); return 0; @@ -910,8 +910,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, port_params->n_pipes_per_subport, port_params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler subport params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler subport params check failed (%d)", __func__, status); return 0; @@ -941,8 +941,8 @@ rte_sched_port_config(struct rte_sched_port_params *params) status = rte_sched_port_check_params(params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); return NULL; } @@ -956,7 +956,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE, params->socket); if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); return NULL; } @@ -965,7 +965,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port->subport_profiles = rte_zmalloc_socket("subport_profile", size2, RTE_CACHE_LINE_SIZE, params->socket); if (port->subport_profiles == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); rte_free(port); return NULL; } @@ -1107,8 +1107,8 @@ rte_sched_red_config(struct rte_sched_port *port, params->cman_params->red_params[i][j].maxp_inv) != 0) { rte_sched_free_memory(port, n_subports); - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: RED configuration init fails", __func__); return -EINVAL; } } @@ -1127,8 +1127,8 @@ rte_sched_pie_config(struct rte_sched_port *port, for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { if (params->cman_params->pie_params[i].tailq_th > params->qsize[i]) { - RTE_LOG(NOTICE, SCHED, - "%s: PIE tailq threshold incorrect\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE tailq threshold incorrect", __func__); return -EINVAL; } @@ -1139,8 +1139,8 @@ rte_sched_pie_config(struct rte_sched_port *port, params->cman_params->pie_params[i].tailq_th) != 0) { rte_sched_free_memory(port, n_subports); - 
RTE_LOG(NOTICE, SCHED, - "%s: PIE configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE configuration init fails", __func__); return -EINVAL; } } @@ -1171,14 +1171,14 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port, struct rte_sched_subport *s; if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); return -EINVAL; } @@ -1204,21 +1204,21 @@ rte_sched_subport_config(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return 0; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); ret = -EINVAL; goto out; } if (subport_profile_id >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profile exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profile exceeds the max limit", __func__); ret = -EINVAL; goto out; @@ -1234,8 +1234,8 @@ rte_sched_subport_config(struct rte_sched_port *port, port->n_pipes_per_subport, port->rate); if (status != 0) { - RTE_LOG(NOTICE, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(NOTICE, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); ret = -EINVAL; goto out; @@ -1250,8 +1250,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s = rte_zmalloc_socket("subport_params", size0 + size1, RTE_CACHE_LINE_SIZE, port->socket); if (s == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Memory allocation fails", __func__); ret = -ENOMEM; goto out; } @@ -1282,8 +1282,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->cman_enabled = true; status = rte_sched_cman_config(port, s, params, n_subports); if (status) { - RTE_LOG(NOTICE, SCHED, - "%s: CMAN configuration fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: CMAN configuration fails", __func__); return status; } } else { @@ -1330,8 +1330,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, bmp_mem_size); if (s->bmp == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Subport bitmap init error\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Subport bitmap init error", __func__); ret = -EINVAL; goto out; } @@ -1400,29 +1400,29 @@ rte_sched_pipe_config(struct rte_sched_port *port, deactivate = (pipe_profile < 0); if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); ret = -EINVAL; goto out; } s = port->subports[subport_id]; if (pipe_id >= s->n_pipes_per_subport_enabled) { - RTE_LOG(ERR, SCHED, - 
"%s: Incorrect value for parameter pipe id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe id", __func__); ret = -EINVAL; goto out; } if (!deactivate && profile >= s->n_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter pipe profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe profile", __func__); ret = -EINVAL; goto out; } @@ -1447,8 +1447,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } @@ -1489,8 +1489,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } p->tc_ov_period_id = s->tc_ov_period_id; @@ -1518,15 +1518,15 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } /* Subport id not exceeds the max limit */ if (subport_id > port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } @@ -1534,16 +1534,16 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profiles exceeds the max limit */ if (s->n_pipe_profiles >= s->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Number of pipe profiles exceeds the max limit\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Number of pipe profiles exceeds the max limit", __func__); return -EINVAL; } /* Pipe params */ status = pipe_profile_check(params, port->rate, &s->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1553,8 +1553,8 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profile should not exists */ for (i = 0; i < s->n_pipe_profiles; i++) if (memcmp(s->pipe_profiles + i, pp, sizeof(*pp)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile exists", __func__); return -EINVAL; } @@ -1581,20 +1581,20 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter port", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter profile", __func__); return -EINVAL; } if (subport_profile_id == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter subport_profile_id\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for 
parameter subport_profile_id", __func__); return -EINVAL; } @@ -1603,16 +1603,16 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Subport profiles exceeds the max limit */ if (port->n_subport_profiles >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profiles exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profiles exceeds the max limit", __func__); return -EINVAL; } status = subport_profile_check(params, port->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1622,8 +1622,8 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, for (i = 0; i < port->n_subport_profiles; i++) if (memcmp(port->subport_profiles + i, dst, sizeof(*dst)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile exists", __func__); return -EINVAL; } @@ -1695,26 +1695,26 @@ rte_sched_subport_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (tc_ov == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc_ov\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc_ov", __func__); return -EINVAL; } @@ -1743,26 +1743,26 @@ rte_sched_queue_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (queue_id >= rte_sched_port_queues_per_port(port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for queue id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for queue id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (qlen == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter qlen\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter qlen", __func__); return -EINVAL; } subport_qmask = port->n_pipes_per_subport_log2 + 4; diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c index 902cb78eac..944f5064d2 100644 --- a/lib/table/rte_table_acl.c +++ b/lib/table/rte_table_acl.c @@ -65,21 +65,21 @@ rte_table_acl_create( /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for params", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for name", __func__); return NULL; } if 
(p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rules", __func__); return NULL; } if ((p->n_rule_fields == 0) || (p->n_rule_fields > RTE_ACL_MAX_FIELDS)) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rule_fields", __func__); return NULL; } @@ -98,8 +98,8 @@ rte_table_acl_create( acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (acl == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for ACL table", __func__, total_size); return NULL; } @@ -140,7 +140,7 @@ rte_table_acl_free(void *table) /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -164,7 +164,7 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Create low level ACL table */ ctx = rte_acl_create(&acl->acl_params); if (ctx == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot create low level ACL table", __func__); return -1; } @@ -176,8 +176,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) status = rte_acl_add_rules(ctx, acl->acl_rule_list[i], 1); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot add rule to low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot add rule to low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -196,8 +196,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Build low level ACl table */ status = rte_acl_build(ctx, &acl->cfg); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot build the low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot build the low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -226,29 +226,29 @@ rte_table_acl_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } @@ -291,7 +291,7 @@ rte_table_acl_entry_add( /* Return if max rules */ if (free_pos_valid == 0) { - RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Max number of rules reached", __func__); return -ENOSPC; } @@ -342,15 +342,15 @@ rte_table_acl_entry_delete( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, 
"%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } @@ -424,28 +424,28 @@ rte_table_acl_entry_add_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to add\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to add", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entries_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries_ptr parameter is NULL", __func__); return -EINVAL; } @@ -455,20 +455,20 @@ rte_table_acl_entry_add_bulk( struct rte_table_acl_rule_add_params *rule; if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } if (entries[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } rule = keys[i]; if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } } @@ -604,26 +604,26 @@ rte_table_acl_entry_delete_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to delete\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to delete", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } for (i = 0; i < n_keys; i++) { if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c index a45b29ed6a..0b3107104d 100644 --- a/lib/table/rte_table_array.c +++ b/lib/table/rte_table_array.c @@ -61,8 +61,8 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size) total_size = 
total_cl_size * RTE_CACHE_LINE_SIZE; t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for array table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for array table", __func__, total_size); return NULL; } @@ -83,7 +83,7 @@ rte_table_array_free(void *table) /* Check input parameters */ if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -107,24 +107,24 @@ rte_table_array_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c index 86c960c103..228b49a893 100644 --- a/lib/table/rte_table_hash_cuckoo.c +++ b/lib/table/rte_table_hash_cuckoo.c @@ -47,27 +47,27 @@ static int check_params_create_hash_cuckoo(struct rte_table_hash_cuckoo_params *params) { if (params == NULL) { - RTE_LOG(ERR, TABLE, "NULL Input Parameters.\n"); + RTE_LOG_LINE(ERR, TABLE, "NULL Input Parameters."); return -EINVAL; } if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "Table name is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "Table name is NULL."); return -EINVAL; } if (params->key_size == 0) { - RTE_LOG(ERR, TABLE, "Invalid key_size.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid key_size."); return -EINVAL; } if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "Invalid n_keys.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid n_keys."); return -EINVAL; } if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "f_hash is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "f_hash is NULL."); return -EINVAL; } @@ -94,8 +94,8 @@ rte_table_hash_cuckoo_create(void *params, t = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for cuckoo hash table %s", __func__, total_size, p->name); return NULL; } @@ -114,8 +114,8 @@ rte_table_hash_cuckoo_create(void *params, if (h_table == NULL) { h_table = rte_hash_create(&hash_cuckoo_params); if (h_table == NULL) { - RTE_LOG(ERR, TABLE, - "%s: failed to create cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: failed to create cuckoo hash table %s", __func__, p->name); rte_free(t); return NULL; @@ -131,8 +131,8 @@ rte_table_hash_cuckoo_create(void *params, t->key_offset = p->key_offset; t->h_table = h_table; - RTE_LOG(INFO, TABLE, - "%s: Cuckoo hash table %s memory footprint is %u bytes\n", + RTE_LOG_LINE(INFO, TABLE, + "%s: Cuckoo hash table %s memory footprint is %u bytes", __func__, 
p->name, total_size); return t; } diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c index 9f0220ded2..38ea96c654 100644 --- a/lib/table/rte_table_hash_ext.c +++ b/lib/table/rte_table_hash_ext.c @@ -128,33 +128,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -211,8 +211,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) key_sz + key_stack_sz + bkt_ext_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -222,13 +222,13 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " - "footprint is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " + "footprint is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c index 584c3f2c98..63b28f79c0 100644 --- a/lib/table/rte_table_hash_key16.c +++ b/lib/table/rte_table_hash_key16.c @@ -107,32 +107,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - 
RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -181,8 +181,8 @@ rte_table_hash_create_key16_lru(void *params, total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -192,13 +192,13 @@ rte_table_hash_create_key16_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -236,7 +236,7 @@ rte_table_hash_free_key16_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -391,8 +391,8 @@ rte_table_hash_create_key16_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -402,13 +402,13 @@ rte_table_hash_create_key16_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -446,7 +446,7 @@ rte_table_hash_free_key16_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c index 22b5ca9166..6293bf518b 100644 --- a/lib/table/rte_table_hash_key32.c +++ b/lib/table/rte_table_hash_key32.c @@ -111,32 +111,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || 
(!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -184,8 +184,8 @@ rte_table_hash_create_key32_lru(void *params, KEYS_PER_BUCKET * entry_size); total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -195,14 +195,14 @@ rte_table_hash_create_key32_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -244,7 +244,7 @@ rte_table_hash_free_key32_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -400,8 +400,8 @@ rte_table_hash_create_key32_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -411,14 +411,14 @@ rte_table_hash_create_key32_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64" bytes\n", + "is %" PRIu64" bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -460,7 +460,7 @@ rte_table_hash_free_key32_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c index bd0ec4aac0..69e61c2ec8 100644 --- a/lib/table/rte_table_hash_key8.c +++ b/lib/table/rte_table_hash_key8.c @@ -101,32 +101,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) 
{ - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -173,8 +173,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -184,14 +184,14 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -226,7 +226,7 @@ rte_table_hash_free_key8_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -377,8 +377,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -388,14 +388,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -430,7 +430,7 @@ rte_table_hash_free_key8_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c index 758ec4fe7a..190062b33f 100644 --- a/lib/table/rte_table_hash_lru.c +++ b/lib/table/rte_table_hash_lru.c @@ -105,33 +105,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name 
invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -187,9 +187,9 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) key_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } @@ -199,14 +199,14 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" - " is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" + " is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c index c2ef0d9ba0..989ab65ee6 100644 --- a/lib/table/rte_table_lpm.c +++ b/lib/table/rte_table_lpm.c @@ -59,29 +59,29 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid number_tbl8s", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -93,8 +93,8 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM table", __func__, total_size); return NULL; } @@ -107,7 +107,7 @@ 
rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n"); + RTE_LOG_LINE(ERR, TABLE, "Unable to create low-level LPM table"); return NULL; } @@ -127,7 +127,7 @@ rte_table_lpm_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -187,21 +187,21 @@ rte_table_lpm_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -216,7 +216,7 @@ rte_table_lpm_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -226,7 +226,7 @@ rte_table_lpm_entry_add( /* Add rule to low level LPM table */ if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule add failed", __func__); return -1; } @@ -253,16 +253,16 @@ rte_table_lpm_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -271,7 +271,7 @@ rte_table_lpm_entry_delete( status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM algorithmic error", __func__); return -1; } if (status == 0) { @@ -282,7 +282,7 @@ rte_table_lpm_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c index 6f3e11a14f..5b0e643832 100644 --- a/lib/table/rte_table_lpm_ipv6.c +++ b/lib/table/rte_table_lpm_ipv6.c @@ -56,29 +56,29 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL 
input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -90,8 +90,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM IPv6 table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM IPv6 table", __func__, total_size); return NULL; } @@ -103,8 +103,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config); if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, - "Unable to create low-level LPM IPv6 table\n"); + RTE_LOG_LINE(ERR, TABLE, + "Unable to create low-level LPM IPv6 table"); return NULL; } @@ -124,7 +124,7 @@ rte_table_lpm_ipv6_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -184,21 +184,21 @@ rte_table_lpm_ipv6_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -224,7 +224,7 @@ rte_table_lpm_ipv6_entry_add( /* Add rule to low level LPM table */ if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule add failed", __func__); return -1; } @@ -252,16 +252,16 @@ rte_table_lpm_ipv6_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: 
ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -270,7 +270,7 @@ rte_table_lpm_ipv6_entry_delete( status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 algorithmic error", __func__); return -1; } @@ -282,7 +282,7 @@ rte_table_lpm_ipv6_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c index cc21516995..a54b502f79 100644 --- a/lib/table/rte_table_stub.c +++ b/lib/table/rte_table_stub.c @@ -38,8 +38,8 @@ rte_table_stub_create(__rte_unused void *params, stub = rte_zmalloc_socket("TABLE", size, RTE_CACHE_LINE_SIZE, socket_id); if (stub == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for stub table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for stub table", __func__, size); return NULL; } diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c index 83586c5b4f..ff91c3169a 100644 --- a/lib/vhost/fd_man.c +++ b/lib/vhost/fd_man.c @@ -334,8 +334,8 @@ fdset_pipe_init(struct fdset *fdset) int ret; if (pipe(fdset->u.pipefd) < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to create pipe for vhost fdset\n"); + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to create pipe for vhost fdset"); return -1; } @@ -343,8 +343,8 @@ fdset_pipe_init(struct fdset *fdset) fdset_pipe_read_cb, NULL, NULL); if (ret < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to add pipe readfd %d into vhost server fdset\n", + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to add pipe readfd %d into vhost server fdset", fdset->u.readfd); fdset_pipe_uninit(fdset); -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
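All of the hunks above follow the same mechanical pattern: the trailing "\n" is dropped from the format string and the call switches from RTE_LOG() to the new RTE_LOG_LINE() helper, which supplies the newline itself. The helper's definition is introduced in a separate patch of the series and is not visible in these hunks; as a rough, illustrative sketch only (not the exact code from the series), it can be pictured as a thin wrapper that splices "\n" onto the caller's format string at compile time, for example with the existing RTE_FMT()/RTE_FMT_HEAD()/RTE_FMT_TAIL() helpers:

    /* Illustrative sketch only, not the series' exact definition:
     * re-assemble the caller's format string with a trailing newline,
     * so that each call logs exactly one line.
     */
    #define RTE_LOG_LINE(level, logtype, ...) \
    	RTE_LOG(level, logtype, \
    		RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
    			RTE_FMT_TAIL(__VA_ARGS__ ,)))

    /* Usage, matching the conversions above: */
    RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__);      /* before */
    RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__);   /* after  */

With such a wrapper, a format string that still ends in "\n" would print a duplicated newline, which is why each hunk above strips it while converting the call.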
* Re: [RFC v2 12/14] lib: convert to per line logging 2023-12-08 14:59 ` [RFC v2 12/14] lib: convert to per line logging David Marchand @ 2023-12-08 17:16 ` Stephen Hemminger 2023-12-11 12:34 ` David Marchand 2023-12-16 9:30 ` Andrew Rybchenko 1 sibling, 1 reply; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:16 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Andrew Rybchenko, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia On Fri, 8 Dec 2023 15:59:46 +0100 David Marchand <david.marchand@redhat.com> wrote: > Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. > > Note: > - for acl and sched libraries that still has some debug multilines > messages, a direct call to RTE_LOG is used: this will make it easier to > notice such special cases, > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > lib/acl/acl_bld.c | 28 +-- > lib/acl/acl_gen.c | 8 +- > lib/acl/rte_acl.c | 8 +- > lib/acl/tb_mem.c | 4 +- > lib/eal/common/eal_common_bus.c | 22 +- > lib/eal/common/eal_common_class.c | 4 +- > lib/eal/common/eal_common_config.c | 2 +- > lib/eal/common/eal_common_debug.c | 6 +- > lib/eal/common/eal_common_dev.c | 80 +++---- > lib/eal/common/eal_common_devargs.c | 18 +- > lib/eal/common/eal_common_dynmem.c | 34 +-- > lib/eal/common/eal_common_fbarray.c | 12 +- > lib/eal/common/eal_common_interrupts.c | 38 ++-- > lib/eal/common/eal_common_lcore.c | 26 +-- > lib/eal/common/eal_common_memalloc.c | 12 +- > lib/eal/common/eal_common_memory.c | 66 +++--- > lib/eal/common/eal_common_memzone.c | 24 +-- > lib/eal/common/eal_common_options.c | 236 ++++++++++---------- > lib/eal/common/eal_common_proc.c | 112 +++++----- > lib/eal/common/eal_common_tailqs.c | 12 +- > lib/eal/common/eal_common_thread.c | 12 +- > lib/eal/common/eal_common_timer.c | 6 +- > lib/eal/common/eal_common_trace_utils.c | 2 +- > lib/eal/common/eal_trace.h | 4 +- > lib/eal/common/hotplug_mp.c | 54 ++--- > lib/eal/common/malloc_elem.c | 6 +- > lib/eal/common/malloc_heap.c | 40 ++-- > lib/eal/common/malloc_mp.c | 72 +++---- > lib/eal/common/rte_keepalive.c | 2 +- > lib/eal/common/rte_malloc.c | 10 +- > lib/eal/common/rte_service.c | 8 +- > lib/eal/freebsd/eal.c | 74 +++---- > lib/eal/freebsd/eal_alarm.c | 2 +- > lib/eal/freebsd/eal_dev.c | 8 +- > lib/eal/freebsd/eal_hugepage_info.c | 22 +- > lib/eal/freebsd/eal_interrupts.c | 60 +++--- > lib/eal/freebsd/eal_lcore.c | 2 +- > lib/eal/freebsd/eal_memalloc.c | 10 +- > lib/eal/freebsd/eal_memory.c | 34 +-- > lib/eal/freebsd/eal_thread.c | 2 +- > lib/eal/freebsd/eal_timer.c | 10 +- > lib/eal/linux/eal.c | 122 +++++------ > lib/eal/linux/eal_alarm.c | 2 +- > lib/eal/linux/eal_dev.c | 40 ++-- > lib/eal/linux/eal_hugepage_info.c | 38 ++-- > lib/eal/linux/eal_interrupts.c | 116 +++++----- > lib/eal/linux/eal_lcore.c | 4 +- > lib/eal/linux/eal_memalloc.c | 120 +++++------ > lib/eal/linux/eal_memory.c | 208 +++++++++--------- > lib/eal/linux/eal_thread.c | 4 +- > lib/eal/linux/eal_timer.c | 10 +- > lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ > lib/eal/linux/eal_vfio_mp_sync.c | 4 +- > lib/eal/riscv/rte_cycles.c | 4 +- > lib/eal/unix/eal_filesystem.c | 14 +- > 
lib/eal/unix/eal_firmware.c | 2 +- > lib/eal/unix/eal_unix_memory.c | 8 +- > lib/eal/unix/rte_thread.c | 34 +-- > lib/eal/windows/eal.c | 36 ++-- > lib/eal/windows/eal_alarm.c | 12 +- > lib/eal/windows/eal_debug.c | 8 +- > lib/eal/windows/eal_dev.c | 8 +- > lib/eal/windows/eal_hugepages.c | 10 +- > lib/eal/windows/eal_interrupts.c | 10 +- > lib/eal/windows/eal_lcore.c | 6 +- > lib/eal/windows/eal_memalloc.c | 50 ++--- > lib/eal/windows/eal_memory.c | 20 +- > lib/eal/windows/eal_windows.h | 4 +- > lib/eal/windows/include/rte_windows.h | 4 +- > lib/eal/windows/rte_thread.c | 28 +-- > lib/efd/rte_efd.c | 58 ++--- > lib/fib/rte_fib.c | 14 +- > lib/fib/rte_fib6.c | 14 +- > lib/hash/rte_cuckoo_hash.c | 52 ++--- > lib/hash/rte_fbk_hash.c | 4 +- > lib/hash/rte_hash_crc.c | 12 +- > lib/hash/rte_thash.c | 20 +- > lib/hash/rte_thash_gfni.c | 8 +- > lib/ip_frag/rte_ip_frag_common.c | 8 +- > lib/latencystats/rte_latencystats.c | 41 ++-- > lib/log/log.c | 6 +- > lib/lpm/rte_lpm.c | 12 +- > lib/lpm/rte_lpm6.c | 10 +- > lib/mbuf/rte_mbuf.c | 14 +- > lib/mbuf/rte_mbuf_dyn.c | 14 +- > lib/mbuf/rte_mbuf_pool_ops.c | 4 +- > lib/mempool/rte_mempool.c | 24 +-- > lib/mempool/rte_mempool.h | 2 +- > lib/mempool/rte_mempool_ops.c | 10 +- > lib/pipeline/rte_pipeline.c | 228 ++++++++++---------- > lib/port/rte_port_ethdev.c | 18 +- > lib/port/rte_port_eventdev.c | 18 +- > lib/port/rte_port_fd.c | 24 +-- > lib/port/rte_port_frag.c | 14 +- > lib/port/rte_port_ras.c | 12 +- > lib/port/rte_port_ring.c | 18 +- > lib/port/rte_port_sched.c | 12 +- > lib/port/rte_port_source_sink.c | 48 ++--- > lib/port/rte_port_sym_crypto.c | 18 +- > lib/power/guest_channel.c | 38 ++-- > lib/power/power_acpi_cpufreq.c | 106 ++++----- > lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ > lib/power/power_common.c | 10 +- > lib/power/power_cppc_cpufreq.c | 118 +++++----- > lib/power/power_intel_uncore.c | 68 +++--- > lib/power/power_kvm_vm.c | 22 +- > lib/power/power_pstate_cpufreq.c | 144 ++++++------- > lib/power/rte_power.c | 22 +- > lib/power/rte_power_pmd_mgmt.c | 34 +-- > lib/power/rte_power_uncore.c | 14 +- > lib/rcu/rte_rcu_qsbr.c | 2 +- > lib/reorder/rte_reorder.c | 32 +-- > lib/rib/rte_rib.c | 10 +- > lib/rib/rte_rib6.c | 10 +- > lib/ring/rte_ring.c | 24 +-- > lib/sched/rte_pie.c | 18 +- > lib/sched/rte_sched.c | 274 ++++++++++++------------ > lib/table/rte_table_acl.c | 72 +++---- > lib/table/rte_table_array.c | 16 +- > lib/table/rte_table_hash_cuckoo.c | 22 +- > lib/table/rte_table_hash_ext.c | 22 +- > lib/table/rte_table_hash_key16.c | 38 ++-- > lib/table/rte_table_hash_key32.c | 38 ++-- > lib/table/rte_table_hash_key8.c | 38 ++-- > lib/table/rte_table_hash_lru.c | 22 +- > lib/table/rte_table_lpm.c | 42 ++-- > lib/table/rte_table_lpm_ipv6.c | 44 ++-- > lib/table/rte_table_stub.c | 4 +- > lib/vhost/fd_man.c | 8 +- > 129 files changed, 2278 insertions(+), 2279 deletions(-) Coccinelle script for later fixups? Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 12/14] lib: convert to per line logging 2023-12-08 17:16 ` Stephen Hemminger @ 2023-12-11 12:34 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-11 12:34 UTC (permalink / raw) To: Stephen Hemminger Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Andrew Rybchenko, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia On Fri, Dec 8, 2023 at 6:16 PM Stephen Hemminger <stephen@networkplumber.org> wrote: > > On Fri, 8 Dec 2023 15:59:46 +0100 > David Marchand <david.marchand@redhat.com> wrote: > > > Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. > > > > Note: > > - for acl and sched libraries that still has some debug multilines > > messages, a direct call to RTE_LOG is used: this will make it easier to > > notice such special cases, > > > > Signed-off-by: David Marchand <david.marchand@redhat.com> > > --- > > lib/acl/acl_bld.c | 28 +-- > > lib/acl/acl_gen.c | 8 +- > > lib/acl/rte_acl.c | 8 +- > > lib/acl/tb_mem.c | 4 +- > > lib/eal/common/eal_common_bus.c | 22 +- > > lib/eal/common/eal_common_class.c | 4 +- > > lib/eal/common/eal_common_config.c | 2 +- > > lib/eal/common/eal_common_debug.c | 6 +- > > lib/eal/common/eal_common_dev.c | 80 +++---- > > lib/eal/common/eal_common_devargs.c | 18 +- > > lib/eal/common/eal_common_dynmem.c | 34 +-- > > lib/eal/common/eal_common_fbarray.c | 12 +- > > lib/eal/common/eal_common_interrupts.c | 38 ++-- > > lib/eal/common/eal_common_lcore.c | 26 +-- > > lib/eal/common/eal_common_memalloc.c | 12 +- > > lib/eal/common/eal_common_memory.c | 66 +++--- > > lib/eal/common/eal_common_memzone.c | 24 +-- > > lib/eal/common/eal_common_options.c | 236 ++++++++++---------- > > lib/eal/common/eal_common_proc.c | 112 +++++----- > > lib/eal/common/eal_common_tailqs.c | 12 +- > > lib/eal/common/eal_common_thread.c | 12 +- > > lib/eal/common/eal_common_timer.c | 6 +- > > lib/eal/common/eal_common_trace_utils.c | 2 +- > > lib/eal/common/eal_trace.h | 4 +- > > lib/eal/common/hotplug_mp.c | 54 ++--- > > lib/eal/common/malloc_elem.c | 6 +- > > lib/eal/common/malloc_heap.c | 40 ++-- > > lib/eal/common/malloc_mp.c | 72 +++---- > > lib/eal/common/rte_keepalive.c | 2 +- > > lib/eal/common/rte_malloc.c | 10 +- > > lib/eal/common/rte_service.c | 8 +- > > lib/eal/freebsd/eal.c | 74 +++---- > > lib/eal/freebsd/eal_alarm.c | 2 +- > > lib/eal/freebsd/eal_dev.c | 8 +- > > lib/eal/freebsd/eal_hugepage_info.c | 22 +- > > lib/eal/freebsd/eal_interrupts.c | 60 +++--- > > lib/eal/freebsd/eal_lcore.c | 2 +- > > lib/eal/freebsd/eal_memalloc.c | 10 +- > > lib/eal/freebsd/eal_memory.c | 34 +-- > > lib/eal/freebsd/eal_thread.c | 2 +- > > lib/eal/freebsd/eal_timer.c | 10 +- > > lib/eal/linux/eal.c | 122 +++++------ > > lib/eal/linux/eal_alarm.c | 2 +- > > lib/eal/linux/eal_dev.c | 40 ++-- > > lib/eal/linux/eal_hugepage_info.c | 38 ++-- > > lib/eal/linux/eal_interrupts.c | 116 +++++----- > > lib/eal/linux/eal_lcore.c | 4 +- > > lib/eal/linux/eal_memalloc.c | 120 +++++------ > > lib/eal/linux/eal_memory.c | 208 +++++++++--------- > > lib/eal/linux/eal_thread.c | 4 +- > > lib/eal/linux/eal_timer.c | 10 +- > > lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ > > 
lib/eal/linux/eal_vfio_mp_sync.c | 4 +- > > lib/eal/riscv/rte_cycles.c | 4 +- > > lib/eal/unix/eal_filesystem.c | 14 +- > > lib/eal/unix/eal_firmware.c | 2 +- > > lib/eal/unix/eal_unix_memory.c | 8 +- > > lib/eal/unix/rte_thread.c | 34 +-- > > lib/eal/windows/eal.c | 36 ++-- > > lib/eal/windows/eal_alarm.c | 12 +- > > lib/eal/windows/eal_debug.c | 8 +- > > lib/eal/windows/eal_dev.c | 8 +- > > lib/eal/windows/eal_hugepages.c | 10 +- > > lib/eal/windows/eal_interrupts.c | 10 +- > > lib/eal/windows/eal_lcore.c | 6 +- > > lib/eal/windows/eal_memalloc.c | 50 ++--- > > lib/eal/windows/eal_memory.c | 20 +- > > lib/eal/windows/eal_windows.h | 4 +- > > lib/eal/windows/include/rte_windows.h | 4 +- > > lib/eal/windows/rte_thread.c | 28 +-- > > lib/efd/rte_efd.c | 58 ++--- > > lib/fib/rte_fib.c | 14 +- > > lib/fib/rte_fib6.c | 14 +- > > lib/hash/rte_cuckoo_hash.c | 52 ++--- > > lib/hash/rte_fbk_hash.c | 4 +- > > lib/hash/rte_hash_crc.c | 12 +- > > lib/hash/rte_thash.c | 20 +- > > lib/hash/rte_thash_gfni.c | 8 +- > > lib/ip_frag/rte_ip_frag_common.c | 8 +- > > lib/latencystats/rte_latencystats.c | 41 ++-- > > lib/log/log.c | 6 +- > > lib/lpm/rte_lpm.c | 12 +- > > lib/lpm/rte_lpm6.c | 10 +- > > lib/mbuf/rte_mbuf.c | 14 +- > > lib/mbuf/rte_mbuf_dyn.c | 14 +- > > lib/mbuf/rte_mbuf_pool_ops.c | 4 +- > > lib/mempool/rte_mempool.c | 24 +-- > > lib/mempool/rte_mempool.h | 2 +- > > lib/mempool/rte_mempool_ops.c | 10 +- > > lib/pipeline/rte_pipeline.c | 228 ++++++++++---------- > > lib/port/rte_port_ethdev.c | 18 +- > > lib/port/rte_port_eventdev.c | 18 +- > > lib/port/rte_port_fd.c | 24 +-- > > lib/port/rte_port_frag.c | 14 +- > > lib/port/rte_port_ras.c | 12 +- > > lib/port/rte_port_ring.c | 18 +- > > lib/port/rte_port_sched.c | 12 +- > > lib/port/rte_port_source_sink.c | 48 ++--- > > lib/port/rte_port_sym_crypto.c | 18 +- > > lib/power/guest_channel.c | 38 ++-- > > lib/power/power_acpi_cpufreq.c | 106 ++++----- > > lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ > > lib/power/power_common.c | 10 +- > > lib/power/power_cppc_cpufreq.c | 118 +++++----- > > lib/power/power_intel_uncore.c | 68 +++--- > > lib/power/power_kvm_vm.c | 22 +- > > lib/power/power_pstate_cpufreq.c | 144 ++++++------- > > lib/power/rte_power.c | 22 +- > > lib/power/rte_power_pmd_mgmt.c | 34 +-- > > lib/power/rte_power_uncore.c | 14 +- > > lib/rcu/rte_rcu_qsbr.c | 2 +- > > lib/reorder/rte_reorder.c | 32 +-- > > lib/rib/rte_rib.c | 10 +- > > lib/rib/rte_rib6.c | 10 +- > > lib/ring/rte_ring.c | 24 +-- > > lib/sched/rte_pie.c | 18 +- > > lib/sched/rte_sched.c | 274 ++++++++++++------------ > > lib/table/rte_table_acl.c | 72 +++---- > > lib/table/rte_table_array.c | 16 +- > > lib/table/rte_table_hash_cuckoo.c | 22 +- > > lib/table/rte_table_hash_ext.c | 22 +- > > lib/table/rte_table_hash_key16.c | 38 ++-- > > lib/table/rte_table_hash_key32.c | 38 ++-- > > lib/table/rte_table_hash_key8.c | 38 ++-- > > lib/table/rte_table_hash_lru.c | 22 +- > > lib/table/rte_table_lpm.c | 42 ++-- > > lib/table/rte_table_lpm_ipv6.c | 44 ++-- > > lib/table/rte_table_stub.c | 4 +- > > lib/vhost/fd_man.c | 8 +- > > 129 files changed, 2278 insertions(+), 2279 deletions(-) > > Coccinelle script for later fixups? I had a look, but I fail to see how to express matching / splitting a string for a certain delimiter with coccinelle. There is probably a need to call some external python... ? Honestly, I was hoping you would volunteer on this topic :-). -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 12/14] lib: convert to per line logging 2023-12-08 14:59 ` [RFC v2 12/14] lib: convert to per line logging David Marchand 2023-12-08 17:16 ` Stephen Hemminger @ 2023-12-16 9:30 ` Andrew Rybchenko 1 sibling, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:30 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia On 12/8/23 17:59, David Marchand wrote: > Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. > > Note: > - for acl and sched libraries that still has some debug multilines > messages, a direct call to RTE_LOG is used: this will make it easier to > notice such special cases, > > Signed-off-by: David Marchand <david.marchand@redhat.com> For mempool, Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
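The note about acl and sched in the commit message quoted above is worth illustrating: a per-line helper is only suitable for messages that form exactly one line, so multi-line debug dumps deliberately stay on direct RTE_LOG() calls where the newlines are managed by hand. A simplified, hypothetical example (the logtype and fields in the second call are placeholders, not code from the series):

    /* Single-line message: the per-line helper supplies the newline. */
    RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__);

    /* Multi-line debug dump: keep a direct RTE_LOG() call and keep the
     * explicit "\n" characters, as the acl and sched code does.
     */
    RTE_LOG(DEBUG, TABLE, "%s: table %s stats:\n"
    	"\tn_keys=%u;\n"
    	"\tn_buckets=%u;\n",
    	__func__, name, n_keys, n_buckets);

Leaving such dumps on plain RTE_LOG() also keeps them easy to spot later, which is the stated reason for treating them as special cases rather than converting them.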
* [RFC v2 13/14] lib: replace logging helpers 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (11 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 12/14] lib: convert to per line logging David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-08 17:18 ` Stephen Hemminger 2023-12-16 9:42 ` Andrew Rybchenko 2023-12-08 14:59 ` [RFC v2 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Ruifeng Wang, Andrew Rybchenko, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia This is a preparation step before the next change. Many libraries have their own logging helpers that do not add a newline in their format string. Some previous changes fixed places where some of those helpers are called without a trailing newline. Using RTE_LOG_LINE in the existing helpers will ensure we don't introduce new issues in the future. The problem is that if we simply convert to the RTE_LOG_LINE helper, a future fix may introduce a regression since the logging helper change won't be backported. To address this concern, rename existing helpers: backporting a call to them will trigger some conflict or build issue in LTS branches. Note: - bpf and vhost that still has some debug multilines messages, a direct call to RTE_LOG/RTE_LOG_DP is used: this will make it easier to notice such special cases, - about previously publicly exposed logging helpers, when such helper is not publicly used (iow in public inline API), it is removed from the public API (this is the case for the member library), Signed-off-by: David Marchand <david.marchand@redhat.com> --- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 8 +- lib/bpf/bpf_validate.c | 38 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 878 +++++++++++++-------------- lib/ethdev/rte_ethdev.h | 52 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/pdump/rte_pdump.c | 112 ++-- lib/power/power_acpi_cpufreq.c | 10 +- lib/power/power_amd_pstate_cpufreq.c | 12 +- lib/power/power_common.c | 4 +- lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 12 +- lib/power/power_intel_uncore.c | 4 +- lib/power/power_pstate_cpufreq.c | 12 +- lib/regexdev/rte_regexdev.c | 86 +-- lib/regexdev/rte_regexdev.h | 14 +- lib/telemetry/telemetry.c | 41 +- lib/vhost/iotlb.c | 18 +- lib/vhost/socket.c | 102 ++-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 ++-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 ++-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_user.c | 508 ++++++++-------- lib/vhost/virtio_net.c | 188 +++--- lib/vhost/virtio_net_ctrl.c | 38 +- 50 files changed, 1433 insertions(+), 1416 deletions(-) 
create mode 100644 lib/member/member.h diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c index 8a0254d8bb..b913feeb9b 100644 --- a/lib/bpf/bpf.c +++ b/lib/bpf/bpf.c @@ -44,7 +44,7 @@ __rte_bpf_jit(struct rte_bpf *bpf) #endif if (rc != 0) - RTE_BPF_LOG(WARNING, "%s(%p) failed, error code: %d;\n", + BPF_LOG(WARNING, "%s(%p) failed, error code: %d;", __func__, bpf, rc); return rc; } diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index d441be6663..98bcd007c7 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -226,8 +226,8 @@ static bool convert_bpf_load(const struct bpf_insn *fp, case SKF_AD_OFF + SKF_AD_RANDOM: case SKF_AD_OFF + SKF_AD_ALU_XOR_X: /* Linux has special negative offsets to access meta-data. */ - RTE_BPF_LOG(ERR, - "rte_bpf_convert: socket offset %d not supported\n", + BPF_LOG(ERR, + "rte_bpf_convert: socket offset %d not supported", fp->k - SKF_AD_OFF); return true; default: @@ -246,7 +246,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, uint8_t bpf_src; if (len > BPF_MAXINSNS) { - RTE_BPF_LOG(ERR, "%s: cBPF program too long (%zu insns)\n", + BPF_LOG(ERR, "%s: cBPF program too long (%zu insns)", __func__, len); return -EINVAL; } @@ -482,7 +482,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, /* Unknown instruction. */ default: - RTE_BPF_LOG(ERR, "%s: Unknown instruction!: %#x\n", + BPF_LOG(ERR, "%s: Unknown instruction!: %#x", __func__, fp->code); goto err; } @@ -526,7 +526,7 @@ rte_bpf_convert(const struct bpf_program *prog) int ret; if (prog == NULL) { - RTE_BPF_LOG(ERR, "%s: NULL program\n", __func__); + BPF_LOG(ERR, "%s: NULL program", __func__); rte_errno = EINVAL; return NULL; } @@ -534,12 +534,12 @@ rte_bpf_convert(const struct bpf_program *prog) /* 1st pass: calculate the eBPF program length */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot get eBPF length\n", __func__); + BPF_LOG(ERR, "%s: cannot get eBPF length", __func__); rte_errno = -ret; return NULL; } - RTE_BPF_LOG(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u\n", + BPF_LOG(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u", __func__, prog->bf_len, ebpf_len); prm = rte_zmalloc("bpf_filter", @@ -555,7 +555,7 @@ rte_bpf_convert(const struct bpf_program *prog) /* 2nd pass: remap cBPF to eBPF instructions */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__); + BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF", __func__); free(prm); rte_errno = -ret; return NULL; diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c index 09f4a9a571..5274333482 100644 --- a/lib/bpf/bpf_exec.c +++ b/lib/bpf/bpf_exec.c @@ -43,8 +43,8 @@ #define BPF_DIV_ZERO_CHECK(bpf, reg, ins, type) do { \ if ((type)(reg)[(ins)->src_reg] == 0) { \ - RTE_BPF_LOG(ERR, \ - "%s(%p): division by 0 at pc: %#zx;\n", \ + BPF_LOG(ERR, \ + "%s(%p): division by 0 at pc: %#zx;", \ __func__, bpf, \ (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \ return 0; \ @@ -136,8 +136,8 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM], mb = (const struct rte_mbuf *)(uintptr_t)reg[EBPF_REG_6]; p = rte_pktmbuf_read(mb, off, len, reg + EBPF_REG_0); if (p == NULL) - RTE_BPF_LOG(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " - "load beyond packet boundary at pc: %#zx;\n", + BPF_LOG(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " + "load beyond packet boundary at pc: %#zx;", __func__, bpf, mb, off, len, (uintptr_t)(ins) - 
(uintptr_t)(bpf)->prm.ins); return p; @@ -462,8 +462,8 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM]) case (BPF_JMP | EBPF_EXIT): return reg[EBPF_REG_0]; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %#zx;\n", + BPF_LOG(ERR, + "%s(%p): invalid opcode %#x at pc: %#zx;", __func__, bpf, ins->code, (uintptr_t)ins - (uintptr_t)bpf->prm.ins); return 0; diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index b483569071..30d83d2b40 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); int __rte_bpf_jit_arm64(struct rte_bpf *bpf); extern int rte_bpf_logtype; +#define RTE_LOGTYPE_BPF rte_bpf_logtype -#define RTE_BPF_LOG(lvl, fmt, args...) \ - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) +#define BPF_LOG(lvl, fmt, args...) \ + RTE_LOG(lvl, BPF, fmt "\n", ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index f9ddafd7dc..9a92174158 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit) for (idx = 0; idx < limit; idx++) { if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) { - RTE_BPF_LOG(ERR, - "%s: invalid opcode at %u;\n", __func__, idx); + BPF_LOG(ERR, + "%s: invalid opcode at %u;", __func__, idx); return -EINVAL; } } @@ -1378,8 +1378,8 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) emit_epilogue(ctx); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + BPF_LOG(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c index a73b2006db..7f760e82c7 100644 --- a/lib/bpf/bpf_jit_x86.c +++ b/lib/bpf/bpf_jit_x86.c @@ -1476,8 +1476,8 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf) emit_epilog(st); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + BPF_LOG(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 45ce9210da..9a4132bc90 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -98,7 +98,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) if (rc != 0) { rte_errno = -rc; - RTE_BPF_LOG(ERR, "%s: %d-th xsym is invalid\n", __func__, i); + BPF_LOG(ERR, "%s: %d-th xsym is invalid", __func__, i); return NULL; } diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c index 02a5d8ba0d..e25351e664 100644 --- a/lib/bpf/bpf_load_elf.c +++ b/lib/bpf/bpf_load_elf.c @@ -84,8 +84,8 @@ resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz, * as an ordinary EBPF_CALL. 
*/ if (ins[idx].src_reg == EBPF_PSEUDO_CALL) { - RTE_BPF_LOG(INFO, "%s(%u): " - "EBPF_PSEUDO_CALL to external function: %s\n", + BPF_LOG(INFO, "%s(%u): " + "EBPF_PSEUDO_CALL to external function: %s", __func__, idx, sn); ins[idx].src_reg = EBPF_REG_0; } @@ -121,7 +121,7 @@ check_elf_header(const Elf64_Ehdr *eh) err = "unexpected machine type"; if (err != NULL) { - RTE_BPF_LOG(ERR, "%s(): %s\n", __func__, err); + BPF_LOG(ERR, "%s(): %s", __func__, err); return -EINVAL; } @@ -144,7 +144,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) eh = elf64_getehdr(elf); if (eh == NULL) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -167,7 +167,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) if (sd == NULL || sd->d_size == 0 || sd->d_size % sizeof(struct ebpf_insn) != 0) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -216,8 +216,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz, rc = resolve_xsym(sn, ofs, ins, ins_sz, prm); if (rc != 0) { - RTE_BPF_LOG(ERR, - "resolve_xsym(%s, %zu) error code: %d\n", + BPF_LOG(ERR, + "resolve_xsym(%s, %zu) error code: %d", sn, ofs, rc); return rc; } @@ -309,7 +309,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, fd = open(fname, O_RDONLY); if (fd < 0) { rc = errno; - RTE_BPF_LOG(ERR, "%s(%s) error code: %d(%s)\n", + BPF_LOG(ERR, "%s(%s) error code: %d(%s)", __func__, fname, rc, strerror(rc)); rte_errno = EINVAL; return NULL; @@ -319,15 +319,15 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, close(fd); if (bpf == NULL) { - RTE_BPF_LOG(ERR, + BPF_LOG(ERR, "%s(fname=\"%s\", sname=\"%s\") failed, " - "error code: %d\n", + "error code: %d", __func__, fname, sname, rte_errno); return NULL; } - RTE_BPF_LOG(INFO, "%s(fname=\"%s\", sname=\"%s\") " - "successfully creates %p(jit={.func=%p,.sz=%zu});\n", + BPF_LOG(INFO, "%s(fname=\"%s\", sname=\"%s\") " + "successfully creates %p(jit={.func=%p,.sz=%zu});", __func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz); return bpf; } diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c index 7a8e4a6ef4..e3b05a4553 100644 --- a/lib/bpf/bpf_pkt.c +++ b/lib/bpf/bpf_pkt.c @@ -512,7 +512,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, ftx = select_tx_callback(prm->prog_arg.type, flags); if (frx == NULL && ftx == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no callback selected;\n", + BPF_LOG(ERR, "%s(%u, %u): no callback selected;", __func__, port, queue); return -EINVAL; } @@ -524,7 +524,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, rte_bpf_get_jit(bpf, &jit); if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no JIT generated;\n", + BPF_LOG(ERR, "%s(%u, %u): no JIT generated;", __func__, port, queue); rte_bpf_destroy(bpf); return -ENOTSUP; diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index e9f23304bc..f4ec789412 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,8 +19,8 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config, " - "rebuild with libelf installed\n", + BPF_LOG(ERR, "%s() is not supported with current config, " + 
"rebuild with libelf installed", __func__); rte_errno = ENOTSUP; return NULL; @@ -36,8 +36,8 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config, " - "rebuild with libpcap installed\n", + BPF_LOG(ERR, "%s() is not supported with current config, " + "rebuild with libpcap installed", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index f246b3c5eb..62bcf9ead1 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1812,15 +1812,15 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) uint32_t ne; if (nidx > bvf->prm->nb_ins) { - RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " - "next pc: %u\n", + BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " + "next pc: %u", __func__, get_node_idx(bvf, node), nidx); return -EINVAL; } ne = node->nb_edge; if (ne >= RTE_DIM(node->edge_dest)) { - RTE_BPF_LOG(ERR, "%s: internal error at pc: %u\n", + BPF_LOG(ERR, "%s: internal error at pc: %u", __func__, get_node_idx(bvf, node)); return -EINVAL; } @@ -1927,7 +1927,7 @@ log_unreachable(const struct bpf_verifier *bvf) if (node->colour == WHITE && ins->code != (BPF_LD | BPF_IMM | EBPF_DW)) - RTE_BPF_LOG(ERR, "unreachable code at pc: %u;\n", i); + BPF_LOG(ERR, "unreachable code at pc: %u;", i); } } @@ -1948,8 +1948,8 @@ log_loop(const struct bpf_verifier *bvf) for (j = 0; j != node->nb_edge; j++) { if (node->edge_type[j] == BACK_EDGE) - RTE_BPF_LOG(ERR, - "loop at pc:%u --> pc:%u;\n", + BPF_LOG(ERR, + "loop at pc:%u --> pc:%u;", i, node->edge_dest[j]); } } @@ -1979,7 +1979,7 @@ validate(struct bpf_verifier *bvf) err = check_syntax(ins); if (err != 0) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + BPF_LOG(ERR, "%s: %s at pc: %u", __func__, err, i); rc |= -EINVAL; } @@ -2048,7 +2048,7 @@ validate(struct bpf_verifier *bvf) dfs(bvf); - RTE_BPF_LOG(DEBUG, "%s(%p) stats:\n" + RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" "nb_nodes=%u;\n" "nb_jcc_nodes=%u;\n" "node_color={[WHITE]=%u, [GREY]=%u,, [BLACK]=%u};\n" @@ -2062,7 +2062,7 @@ validate(struct bpf_verifier *bvf) bvf->edge_type[BACK_EDGE], bvf->edge_type[CROSS_EDGE]); if (bvf->node_colour[BLACK] != bvf->nb_nodes) { - RTE_BPF_LOG(ERR, "%s(%p) unreachable instructions;\n", + BPF_LOG(ERR, "%s(%p) unreachable instructions;", __func__, bvf); log_unreachable(bvf); return -EINVAL; @@ -2070,13 +2070,13 @@ validate(struct bpf_verifier *bvf) if (bvf->node_colour[GREY] != 0 || bvf->node_colour[WHITE] != 0 || bvf->edge_type[UNKNOWN_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) DFS internal error;\n", + BPF_LOG(ERR, "%s(%p) DFS internal error;", __func__, bvf); return -EINVAL; } if (bvf->edge_type[BACK_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) loops detected;\n", + BPF_LOG(ERR, "%s(%p) loops detected;", __func__, bvf); log_loop(bvf); return -EINVAL; @@ -2144,8 +2144,8 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) /* get new eval_state for this node */ st = pull_eval_state(bvf); if (st == NULL) { - RTE_BPF_LOG(ERR, - "%s: internal error (out of space) at pc: %u\n", + BPF_LOG(ERR, + "%s: internal error (out of space) at pc: %u", __func__, get_node_idx(bvf, node)); return -ENOMEM; } @@ -2157,7 +2157,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) node->evst = bvf->evst; bvf->evst = st; - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), node->evst, 
bvf->evst); return 0; @@ -2169,7 +2169,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) static void restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), bvf->evst, node->evst); bvf->evst = node->evst; @@ -2184,12 +2184,12 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); + BPF_LOG(DEBUG, "%s(pc=%u):", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - RTE_BPF_LOG(DEBUG, + RTE_LOG(DEBUG, BPF, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2263,7 +2263,7 @@ evaluate(struct bpf_verifier *bvf) if (ins_chk[op].eval != NULL && rc == 0) { err = ins_chk[op].eval(bvf, ins + idx); if (err != NULL) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + BPF_LOG(ERR, "%s: %s at pc: %u", __func__, err, idx); rc = -EINVAL; } @@ -2312,7 +2312,7 @@ __rte_bpf_validate(struct rte_bpf *bpf) bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR && (sizeof(uint64_t) != sizeof(uintptr_t) || bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) { - RTE_BPF_LOG(ERR, "%s: unsupported argument type\n", __func__); + BPF_LOG(ERR, "%s: unsupported argument type", __func__); return -ENOTSUP; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index 55a9dcc565..bd917a15fc 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -80,12 +80,12 @@ rte_eth_dev_allocate(const char *name) name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN); if (name_len == 0) { - RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Zero length Ethernet device name"); return NULL; } if (name_len >= RTE_ETH_NAME_MAX_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Ethernet device name is too long"); return NULL; } @@ -96,16 +96,16 @@ rte_eth_dev_allocate(const char *name) goto unlock; if (eth_dev_allocated(name) != NULL) { - RTE_ETHDEV_LOG(ERR, - "Ethernet device with name %s already allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethernet device with name %s already allocated", name); goto unlock; } port_id = eth_dev_find_free_port(); if (port_id == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Reached maximum number of Ethernet ports\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Reached maximum number of Ethernet ports"); goto unlock; } @@ -163,8 +163,8 @@ rte_eth_dev_attach_secondary(const char *name) break; } if (i == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Device %s is not driven by the primary process\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Device %s is not driven by the primary process", name); } else { eth_dev = eth_dev_get(i); @@ -302,8 +302,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, device->numa_node); if (!ethdev->data->dev_private) { - RTE_ETHDEV_LOG(ERR, - "failed to allocate private data\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "failed to allocate private data"); retval = -ENOMEM; goto probe_failed; } @@ -311,8 +311,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, } else { ethdev = rte_eth_dev_attach_secondary(name); if (!ethdev) { - RTE_ETHDEV_LOG(ERR, - "secondary process attach failed, ethdev doesn't exist\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "secondary process attach failed, ethdev doesn't exist"); return -ENODEV; } } @@ -322,15 +322,15 @@ 
rte_eth_dev_create(struct rte_device *device, const char *name, if (ethdev_bus_specific_init) { retval = ethdev_bus_specific_init(ethdev, bus_init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, - "ethdev bus specific initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "ethdev bus specific initialisation failed"); goto probe_failed; } } retval = ethdev_init(ethdev, init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ethdev initialisation failed"); goto probe_failed; } @@ -394,7 +394,7 @@ void rte_eth_dev_internal_reset(struct rte_eth_dev *dev) { if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u must be stopped to allow reset", dev->data->port_id); return; } @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "duplicated representor key: %s", dargs); result = -1; goto parse_cleanup; @@ -524,7 +524,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); return -ENAMETOOLONG; } @@ -549,7 +549,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); rte_errno = ENAMETOOLONG; return NULL; } @@ -559,8 +559,8 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) || size > mz->len || ((uintptr_t)mz->addr & (align - 1)) != 0) { - RTE_ETHDEV_LOG(ERR, - "memzone %s does not justify the requested attributes\n", + RTE_ETHDEV_LOG_LINE(ERR, + "memzone %s does not justify the requested attributes", mz->name); return NULL; } @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG_LINE(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index ddb559aa95..737fff1833 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", + RTE_ETHDEV_LOG_LINE(ERR, "NULL pointer eth_dev=%p pci_dev=%p", (void *)eth_dev, (void *)pci_dev); return; } diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 0e1c7b23c1..a656df293c 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_ETHDEV_LOG(ERR, 
"wrong representor format: %s\n", str); + RTE_ETHDEV_LOG_LINE(ERR, "wrong representor format: %s", str); return str == NULL ? -1 : 0; } @@ -214,7 +214,7 @@ dummy_eth_rx_burst(void *rxq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->rx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->rx_warn_once = true; @@ -233,7 +233,7 @@ dummy_eth_tx_burst(void *txq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->tx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->tx_warn_once = true; @@ -337,7 +337,7 @@ eth_dev_shared_data_prepare(void) sizeof(*eth_dev_shared_data), rte_socket_id(), flags); if (mz == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot allocate ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot allocate ethdev shared data"); goto out; } @@ -355,7 +355,7 @@ eth_dev_shared_data_prepare(void) /* Clean remaining any traces of a previous shared mem */ eth_dev_shared_mz = NULL; eth_dev_shared_data = NULL; - RTE_ETHDEV_LOG(ERR, "Cannot lookup ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot lookup ethdev shared data"); goto out; } if (mz == eth_dev_shared_mz && mz->addr == eth_dev_shared_data) diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 311beb17cb..bc003db8af 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); + RTE_ETHDEV_LOG_LINE(ERR, "cannot parse argument list"); rte_errno = EINVAL; return NULL; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b21764e6fa..8bfeebb5ac 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -182,13 +182,13 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) int str_size; if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL iterator"); return -EINVAL; } if (devargs_str == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot initialize iterator from NULL device description string\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot initialize iterator from NULL device description string"); return -EINVAL; } @@ -279,7 +279,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) error: if (ret == -ENOTSUP) - RTE_ETHDEV_LOG(ERR, "Bus %s does not support iterating.\n", + RTE_ETHDEV_LOG_LINE(ERR, "Bus %s does not support iterating.", iter->bus->name); rte_devargs_reset(&devargs); free(bus_str); @@ -291,8 +291,8 @@ uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter) { if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get next device from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get next device from NULL iterator"); return RTE_MAX_ETHPORTS; } @@ -331,7 +331,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) { if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot do clean up from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot do clean up from NULL iterator"); return; } @@ -447,7 
+447,7 @@ rte_eth_dev_owner_new(uint64_t *owner_id) int ret; if (owner_id == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get new owner ID to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get new owner ID to NULL"); return -EINVAL; } @@ -477,30 +477,30 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, struct rte_eth_dev_owner *port_owner; if (port_id >= RTE_MAX_ETHPORTS || !eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (new_owner == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u owner from NULL owner\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u owner from NULL owner", port_id); return -EINVAL; } if (!eth_is_valid_owner_id(new_owner->id) && !eth_is_valid_owner_id(old_owner_id)) { - RTE_ETHDEV_LOG(ERR, - "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64, old_owner_id, new_owner->id); return -EINVAL; } port_owner = &rte_eth_devices[port_id].data->owner; if (port_owner->id != old_owner_id) { - RTE_ETHDEV_LOG(ERR, - "Cannot set owner to port %u already owned by %s_%016"PRIX64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set owner to port %u already owned by %s_%016"PRIX64, port_id, port_owner->name, port_owner->id); return -EPERM; } @@ -510,7 +510,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, port_owner->id = new_owner->id; - RTE_ETHDEV_LOG(DEBUG, "Port %u owner is %s_%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Port %u owner is %s_%016"PRIx64, port_id, new_owner->name, new_owner->id); return 0; @@ -575,14 +575,14 @@ rte_eth_dev_owner_delete(const uint64_t owner_id) memset(&data->owner, 0, sizeof(struct rte_eth_dev_owner)); } - RTE_ETHDEV_LOG(NOTICE, - "All port owners owned by %016"PRIx64" identifier have removed\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "All port owners owned by %016"PRIx64" identifier have removed", owner_id); eth_dev_shared_data->allocated_owners--; eth_dev_shared_data_release(); } else { - RTE_ETHDEV_LOG(ERR, - "Invalid owner ID=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner ID=%016"PRIx64, owner_id); ret = -EINVAL; } @@ -604,13 +604,13 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner) ethdev = &rte_eth_devices[port_id]; if (!eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (owner == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u owner to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u owner to NULL", port_id); return -EINVAL; } @@ -699,7 +699,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u name to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u name to NULL", port_id); return -EINVAL; } @@ -724,13 +724,13 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get port ID from NULL name"); return -EINVAL; } if (port_id == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get port ID to NULL for %s\n", name); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get port ID to NULL for %s", name); return 
-EINVAL; } @@ -766,16 +766,16 @@ eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Rx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Rx queue_id=%u of device with port_id=%u", rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queues[rx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", rx_queue_id, port_id); return -EINVAL; } @@ -790,16 +790,16 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id) if (tx_queue_id >= dev->data->nb_tx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Tx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Tx queue_id=%u of device with port_id=%u", tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queues[tx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", tx_queue_id, port_id); return -EINVAL; } @@ -839,8 +839,8 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any queue", port_id); return -EINVAL; } @@ -853,15 +853,15 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", rx_queue_id, port_id); return 0; } @@ -890,15 +890,15 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", rx_queue_id, port_id); return 0; } @@ -920,8 +920,8 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any queue", port_id); return -EINVAL; } @@ -934,15 +934,15 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if 
(rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", tx_queue_id, port_id); return 0; } @@ -971,15 +971,15 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", tx_queue_id, port_id); return 0; } @@ -1153,19 +1153,19 @@ eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size, if (dev_info_size == 0) { if (config_size != max_rx_pkt_len) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size" - " %u != %u is not allowed\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size" + " %u != %u is not allowed", port_id, config_size, max_rx_pkt_len); ret = -EINVAL; } } else if (config_size > dev_info_size) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "> max allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "> max allowed value %u", port_id, config_size, dev_info_size); ret = -EINVAL; } else if (config_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "< min allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "< min allowed value %u", port_id, config_size, (unsigned int)RTE_ETHER_MIN_LEN); ret = -EINVAL; } @@ -1203,16 +1203,16 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads, /* Check if any offload is requested but not enabled. */ offload = RTE_BIT64(rte_ctz64(offloads_diff)); if (offload & req_offloads) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to enable %s offload %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to enable %s offload %s", port_id, offload_type, offload_name(offload)); ret = -EINVAL; } /* Check if offload couldn't be disabled. 
*/ if (offload & set_offloads) { - RTE_ETHDEV_LOG(DEBUG, - "Port %u %s offload %s is not requested but enabled\n", + RTE_ETHDEV_LOG_LINE(DEBUG, + "Port %u %s offload %s is not requested but enabled", port_id, offload_type, offload_name(offload)); } @@ -1244,14 +1244,14 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, uint32_t frame_size; if (mtu < dev_info->min_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) < device min MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) < device min MTU (%u) for port_id %u", mtu, dev_info->min_mtu, port_id); return -EINVAL; } if (mtu > dev_info->max_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) > device max MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) > device max MTU (%u) for port_id %u", mtu, dev_info->max_mtu, port_id); return -EINVAL; } @@ -1260,15 +1260,15 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, dev_info->max_mtu); frame_size = mtu + overhead_len; if (frame_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) < min frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) < min frame size (%u) for port_id %u", frame_size, RTE_ETHER_MIN_LEN, port_id); return -EINVAL; } if (frame_size > dev_info->max_rx_pktlen) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) > device max frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) > device max frame size (%u) for port_id %u", frame_size, dev_info->max_rx_pktlen, port_id); return -EINVAL; } @@ -1292,8 +1292,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot configure ethdev port %u from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot configure ethdev port %u from NULL config", port_id); return -EINVAL; } @@ -1302,8 +1302,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, return -ENOTSUP; if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } @@ -1334,7 +1334,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.reserved_64s[1] != 0 || dev_conf->rxmode.reserved_ptrs[0] != NULL || dev_conf->rxmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rxmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1343,7 +1343,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.reserved_64s[1] != 0 || dev_conf->txmode.reserved_ptrs[0] != NULL || dev_conf->txmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "txmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1368,16 +1368,16 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, } if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Rx queues requested (%u) is greater than max supported(%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Number of Rx queues requested (%u) is greater than max supported(%d)", nb_rx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; } if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Tx queues requested (%u) is greater than max supported(%d)\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Number of Tx queues requested (%u) is greater than max supported(%d)", nb_tx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; @@ -1389,14 +1389,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, * configured device. */ if (nb_rx_q > dev_info.max_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u", port_id, nb_rx_q, dev_info.max_rx_queues); ret = -EINVAL; goto rollback; } if (nb_tx_q > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u", port_id, nb_tx_q, dev_info.max_tx_queues); ret = -EINVAL; goto rollback; @@ -1405,14 +1405,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check that the device supports requested interrupts */ if ((dev_conf->intr_conf.lsc == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support lsc\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support lsc", dev->device->driver->name); ret = -EINVAL; goto rollback; } if ((dev_conf->intr_conf.rmv == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_RMV))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support rmv\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support rmv", dev->device->driver->name); ret = -EINVAL; goto rollback; @@ -1456,14 +1456,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Rx offloads %s", port_id, eth_dev_offload_names( dev_conf->rxmode.offloads & ~dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s", port_id, eth_dev_offload_names(dev_conf->rxmode.offloads, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Rx offloads %s", port_id, eth_dev_offload_names(dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); @@ -1474,14 +1474,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Tx offloads %s", port_id, eth_dev_offload_names( dev_conf->txmode.offloads & ~dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s", port_id, eth_dev_offload_names(dev_conf->txmode.offloads, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Tx offloads %s", port_id, eth_dev_offload_names(dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); ret = -EINVAL; @@ -1495,8 +1495,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if 
((dev_info.flow_type_rss_offloads | dev_conf->rx_adv_conf.rss_conf.rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, dev_conf->rx_adv_conf.rss_conf.rss_hf, dev_info.flow_type_rss_offloads); ret = -EINVAL; @@ -1506,8 +1506,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check if Rx RSS distribution is disabled but RSS hash is enabled. */ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) && (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested", port_id, rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH)); ret = -EINVAL; @@ -1516,8 +1516,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if (dev_conf->rx_adv_conf.rss_conf.rss_key != NULL && dev_conf->rx_adv_conf.rss_conf.rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, dev_conf->rx_adv_conf.rss_conf.rss_key_len, dev_info.hash_key_size); ret = -EINVAL; @@ -1527,9 +1527,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, algorithm = dev_conf->rx_adv_conf.rss_conf.algorithm; if ((size_t)algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, algorithm, dev_info.rss_algo_capa); ret = -EINVAL; goto rollback; @@ -1540,8 +1540,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, */ diag = eth_dev_rx_queue_config(dev, nb_rx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_rx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_rx_queue_config = %d", port_id, diag); ret = diag; goto rollback; @@ -1549,8 +1549,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = eth_dev_tx_queue_config(dev, nb_tx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_tx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_tx_queue_config = %d", port_id, diag); eth_dev_rx_queue_config(dev, 0); ret = diag; @@ -1559,7 +1559,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = (*dev->dev_ops->dev_configure)(dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u dev_configure = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u dev_configure = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1568,7 +1568,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Initialize Rx profiling if enabled at compilation time. 
*/ diag = __rte_eth_dev_profile_init(port_id, dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u __rte_eth_dev_profile_init = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u __rte_eth_dev_profile_init = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1666,8 +1666,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1676,8 +1676,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1693,8 +1693,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1703,8 +1703,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1728,15 +1728,15 @@ rte_eth_dev_start(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already started", port_id); return 0; } @@ -1757,13 +1757,13 @@ rte_eth_dev_start(uint16_t port_id) ret = eth_dev_config_restore(dev, &dev_info, port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Error during restoring configuration for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Error during restoring configuration for device (port %u): %s", port_id, rte_strerror(-ret)); ret_stop = rte_eth_dev_stop(port_id); if (ret_stop != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u): %s", port_id, rte_strerror(-ret_stop)); } @@ -1796,8 +1796,8 @@ rte_eth_dev_stop(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already stopped", port_id); return 0; } @@ -1866,7 +1866,7 @@ rte_eth_dev_close(uint16_t port_id) */ if (rte_eal_process_type() == RTE_PROC_PRIMARY && dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Cannot close started device (port %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot close started device (port %u)", port_id); return -EINVAL; } @@ -1897,8 +1897,8 @@ 
rte_eth_dev_reset(uint16_t port_id) ret = rte_eth_dev_stop(port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u) before reset: %s - ignore\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u) before reset: %s - ignore", port_id, rte_strerror(-ret)); } ret = eth_err(port_id, dev->dev_ops->dev_reset(dev)); @@ -1946,7 +1946,7 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, */ if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) { - RTE_ETHDEV_LOG(ERR, "%s private_data_size %u < %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "%s private_data_size %u < %u", mp->name, mp->private_data_size, (unsigned int) sizeof(struct rte_pktmbuf_pool_private)); @@ -1954,8 +1954,8 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, } data_room_size = rte_pktmbuf_data_room_size(mp); if (data_room_size < offset + min_length) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (%u + %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", mp->name, data_room_size, offset + min_length, offset, min_length); return -EINVAL; @@ -2001,8 +2001,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, int i; if (n_seg > seg_capa->max_nseg) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx segments %u exceed supported %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx segments %u exceed supported %u", n_seg, seg_capa->max_nseg); return -EINVAL; } @@ -2023,24 +2023,24 @@ rte_eth_rx_queue_check_split(uint16_t port_id, uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { - RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null mempool pointer"); ret = -EINVAL; goto out; } if (seg_idx != 0 && mp_first != mpl && seg_capa->multi_pools == 0) { - RTE_ETHDEV_LOG(ERR, "Receiving to multiple pools is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Receiving to multiple pools is not supported"); ret = -ENOTSUP; goto out; } if (offset != 0) { if (seg_capa->offset_allowed == 0) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation with offset is not supported"); ret = -ENOTSUP; goto out; } if (offset & offset_mask) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation invalid offset alignment %u, %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation invalid offset alignment %u, %u", offset, seg_capa->offset_align_log2); ret = -EINVAL; @@ -2053,22 +2053,22 @@ rte_eth_rx_queue_check_split(uint16_t port_id, if (proto_hdr != 0) { /* Split based on protocol headers. 
*/ if (length != 0) { - RTE_ETHDEV_LOG(ERR, - "Do not set length split and protocol split within a segment\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Do not set length split and protocol split within a segment" ); ret = -EINVAL; goto out; } if ((proto_hdr & prev_proto_hdrs) != 0) { - RTE_ETHDEV_LOG(ERR, - "Repeat with previous protocol headers or proto-split after length-based split\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Repeat with previous protocol headers or proto-split after length-based split" ); ret = -EINVAL; goto out; } if (ptype_cnt <= 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to get supported buffer split header protocols\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to get supported buffer split header protocols", port_id); ret = -ENOTSUP; goto out; @@ -2078,8 +2078,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, break; } if (i == ptype_cnt) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx split header protocols 0x%x is not supported.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx split header protocols 0x%x is not supported.", proto_hdr); ret = -EINVAL; goto out; @@ -2109,8 +2109,8 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, int ret; if (n_mempools > dev_info->max_rx_mempools) { - RTE_ETHDEV_LOG(ERR, - "Too many Rx mempools %u vs maximum %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Too many Rx mempools %u vs maximum %u", n_mempools, dev_info->max_rx_mempools); return -EINVAL; } @@ -2119,7 +2119,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, struct rte_mempool *mp = rx_mempools[pool_idx]; if (mp == NULL) { - RTE_ETHDEV_LOG(ERR, "null Rx mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null Rx mempool pointer"); return -EINVAL; } @@ -2153,7 +2153,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } @@ -2165,7 +2165,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_conf->reserved_64s[1] != 0 || rx_conf->reserved_ptrs[0] != NULL || rx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Rx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx conf reserved fields not zero"); return -EINVAL; } @@ -2181,8 +2181,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if ((mp != NULL) + (rx_conf != NULL && rx_conf->rx_nseg > 0) + (rx_conf != NULL && rx_conf->rx_nmempool > 0) != 1) { - RTE_ETHDEV_LOG(ERR, - "Ambiguous Rx mempools configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Ambiguous Rx mempools configuration"); return -EINVAL; } @@ -2196,9 +2196,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mbp_buf_size = rte_pktmbuf_data_room_size(mp); buf_data_size = mbp_buf_size - RTE_PKTMBUF_HEADROOM; if (buf_data_size > dev_info.max_rx_bufsize) - RTE_ETHDEV_LOG(DEBUG, + RTE_ETHDEV_LOG_LINE(DEBUG, "For port_id=%u, the mbuf data buffer size (%u) is bigger than " - "max buffer size (%u) device can utilize, so mbuf size can be reduced.\n", + "max buffer size (%u) device can utilize, so mbuf size can be reduced.", port_id, buf_data_size, dev_info.max_rx_bufsize); } else if (rx_conf != NULL && rx_conf->rx_nseg > 0) { const struct rte_eth_rxseg_split *rx_seg; @@ -2206,8 +2206,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, /* Extended multi-segment configuration check. 
*/ if (rx_conf->rx_seg == NULL) { - RTE_ETHDEV_LOG(ERR, - "Memory pool is null and no multi-segment configuration provided\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Memory pool is null and no multi-segment configuration provided"); return -EINVAL; } @@ -2221,13 +2221,13 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No Rx segmentation offload configured"); return -EINVAL; } } else if (rx_conf != NULL && rx_conf->rx_nmempool > 0) { /* Extended multi-pool configuration check. */ if (rx_conf->rx_mempools == NULL) { - RTE_ETHDEV_LOG(ERR, "Memory pools array is null\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Memory pools array is null"); return -EINVAL; } @@ -2238,7 +2238,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "Missing Rx mempool configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Missing Rx mempool configuration"); return -EINVAL; } @@ -2254,8 +2254,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc < dev_info.rx_desc_lim.nb_min || nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_rx_desc, dev_info.rx_desc_lim.nb_max, dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_align); @@ -2299,9 +2299,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, */ if ((local_conf.offloads & dev_info.rx_queue_offload_capa) != local_conf.offloads) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d rx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, rx_queue_id, local_conf.offloads, dev_info.rx_queue_offload_capa, __func__); @@ -2310,8 +2310,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (local_conf.share_group > 0 && (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share", port_id, rx_queue_id, local_conf.share_group); return -EINVAL; } @@ -2367,20 +2367,20 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Rx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Rx hairpin queue from NULL config", port_id); return -EINVAL; } if (conf->reserved != 0) { - RTE_ETHDEV_LOG(ERR, - "Rx hairpin reserved field not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Rx hairpin reserved field not zero"); return -EINVAL; } @@ -2393,42 +2393,42 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (nb_rx_desc == 0) nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for 
nb_rx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Rx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Rx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Rx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2472,7 +2472,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -2484,7 +2484,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, tx_conf->reserved_64s[1] != 0 || tx_conf->reserved_ptrs[0] != NULL || tx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Tx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Tx conf reserved fields not zero"); return -EINVAL; } @@ -2502,8 +2502,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc > dev_info.tx_desc_lim.nb_max || nb_tx_desc < dev_info.tx_desc_lim.nb_min || nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_tx_desc, dev_info.tx_desc_lim.nb_max, dev_info.tx_desc_lim.nb_min, dev_info.tx_desc_lim.nb_align); @@ -2547,9 +2547,9 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, */ if ((local_conf.offloads & dev_info.tx_queue_offload_capa) != local_conf.offloads) { - 
RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d tx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, tx_queue_id, local_conf.offloads, dev_info.tx_queue_offload_capa, __func__); @@ -2576,13 +2576,13 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Tx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Tx hairpin queue from NULL config", port_id); return -EINVAL; } @@ -2596,42 +2596,42 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc == 0) nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Tx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Tx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Tx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2671,7 +2671,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is not started\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is not started", tx_port); 
return -EBUSY; } @@ -2679,8 +2679,8 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_bind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to bind hairpin Tx %d" - " to Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to bind hairpin Tx %d" + " to Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_bind(tx_port, rx_port, ret); @@ -2698,7 +2698,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is already stopped\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is already stopped", tx_port); return -EBUSY; } @@ -2706,8 +2706,8 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_unbind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to unbind hairpin Tx %d" - " from Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to unbind hairpin Tx %d" + " from Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_unbind(tx_port, rx_port, ret); @@ -2726,15 +2726,15 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, dev = &rte_eth_devices[port_id]; if (peer_ports == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to NULL", port_id); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to array with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to array with zero size", port_id); return -EINVAL; } @@ -2745,7 +2745,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, ret = (*dev->dev_ops->hairpin_get_peer_ports)(dev, peer_ports, len, direction); if (ret < 0) - RTE_ETHDEV_LOG(ERR, "Failed to get %d hairpin peer %s ports\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to get %d hairpin peer %s ports", port_id, direction ? 
"Rx" : "Tx"); rte_eth_trace_hairpin_get_peer_ports(port_id, peer_ports, len, @@ -2780,8 +2780,8 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer, buffer_tx_error_fn cbfn, void *userdata) { if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set Tx buffer error callback to NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set Tx buffer error callback to NULL buffer"); return -EINVAL; } @@ -2799,7 +2799,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size) int ret = 0; if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL buffer"); return -EINVAL; } @@ -2977,7 +2977,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3005,7 +3005,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3093,18 +3093,18 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link) int ret; if (str == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert link to NULL string\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert link to NULL string"); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot convert link to string with zero size\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot convert link to string with zero size"); return -EINVAL; } if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert to string from NULL link\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert to string from NULL link"); return -EINVAL; } @@ -3133,7 +3133,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) dev = &rte_eth_devices[port_id]; if (stats == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u stats to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u stats to NULL", port_id); return -EINVAL; } @@ -3220,15 +3220,15 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (xstat_name == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID from NULL xstat name\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID from NULL xstat name", port_id); return -ENOMEM; } if (id == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID to NULL", port_id); return -ENOMEM; } @@ -3236,7 +3236,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, /* Get count */ cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL); if (cnt_xstats < 0) { - RTE_ETHDEV_LOG(ERR, "Cannot get count of xstats\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get count of xstats"); return -ENODEV; } @@ -3245,7 +3245,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, if (cnt_xstats != rte_eth_xstats_get_names_by_id( port_id, xstats_names, cnt_xstats, NULL)) { - RTE_ETHDEV_LOG(ERR, "Cannot get xstats lookup\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get xstats lookup"); return -1; } @@ -3376,7 +3376,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, 
sizeof(struct rte_eth_xstat_name)); if (!xstats_names_copy) { - RTE_ETHDEV_LOG(ERR, "Can't allocate memory\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Can't allocate memory"); return -ENOMEM; } @@ -3404,7 +3404,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); free(xstats_names_copy); return -1; } @@ -3600,7 +3600,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); return -1; } values[i] = xstats[ids[i]].value; @@ -3748,8 +3748,8 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size) dev = &rte_eth_devices[port_id]; if (fw_version == NULL && fw_size > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FW version to NULL when string size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FW version to NULL when string size is non zero", port_id); return -EINVAL; } @@ -3781,7 +3781,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info) dev = &rte_eth_devices[port_id]; if (dev_info == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u info to NULL", port_id); return -EINVAL; } @@ -3837,8 +3837,8 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf) dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u configuration to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u configuration to NULL", port_id); return -EINVAL; } @@ -3862,8 +3862,8 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3912,8 +3912,8 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (num > 0 && set_ptypes == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u set packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u set packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3992,7 +3992,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, struct rte_eth_dev_info dev_info; if (ma == NULL) { - RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__); + RTE_ETHDEV_LOG_LINE(ERR, "%s: invalid parameters", __func__); return -EINVAL; } @@ -4019,8 +4019,8 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr) dev = &rte_eth_devices[port_id]; if (mac_addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u MAC address to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u MAC address to NULL", port_id); return -EINVAL; } @@ -4041,7 +4041,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu) dev = &rte_eth_devices[port_id]; if (mtu == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u MTU to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u MTU to NULL", 
port_id); return -EINVAL; } @@ -4082,8 +4082,8 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) } if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before MTU set\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before MTU set", port_id); return -EINVAL; } @@ -4110,13 +4110,13 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on) if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) { - RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: VLAN-filtering disabled", port_id); return -ENOSYS; } if (vlan_id > 4095) { - RTE_ETHDEV_LOG(ERR, "Port_id=%u invalid vlan_id=%u > 4095\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port_id=%u invalid vlan_id=%u > 4095", port_id, vlan_id); return -EINVAL; } @@ -4156,7 +4156,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_queue_id=%u", rx_queue_id); return -EINVAL; } @@ -4261,10 +4261,10 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask) /* Rx VLAN offloading must be within its device capabilities */ if ((dev_offloads & dev_info.rx_offload_capa) != dev_offloads) { new_offloads = dev_offloads & ~orig_offloads; - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u requested new added VLAN offloads " "0x%" PRIx64 " must be within Rx offloads capabilities " - "0x%" PRIx64 " in %s()\n", + "0x%" PRIx64 " in %s()", port_id, new_offloads, dev_info.rx_offload_capa, __func__); return -EINVAL; @@ -4342,8 +4342,8 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u flow control config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u flow control config to NULL", port_id); return -EINVAL; } @@ -4368,14 +4368,14 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u flow control from NULL config", port_id); return -EINVAL; } if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid send_xon, only 0/1 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid send_xon, only 0/1 allowed"); return -EINVAL; } @@ -4399,14 +4399,14 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u priority flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u priority flow control from NULL config", port_id); return -EINVAL; } if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid priority, only 0-7 allowed"); return -EINVAL; } @@ -4428,16 +4428,16 @@ validate_rx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->rx_pause.tx_qid >= dev_info->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Tx queue not in range for Rx pause requested:%d configured:%d\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "PFC Tx queue not in range for Rx pause requested:%d configured:%d", pfc_queue_conf->rx_pause.tx_qid, dev_info->nb_tx_queues); return -EINVAL; } if (pfc_queue_conf->rx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Rx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Rx pause requested:%d max:%d", pfc_queue_conf->rx_pause.tc, tc_max); return -EINVAL; } @@ -4453,16 +4453,16 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->tx_pause.rx_qid >= dev_info->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Rx queue not in range for Tx pause requested:%d configured:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC Rx queue not in range for Tx pause requested:%d configured:%d", pfc_queue_conf->tx_pause.rx_qid, dev_info->nb_rx_queues); return -EINVAL; } if (pfc_queue_conf->tx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Tx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Tx pause requested:%d max:%d", pfc_queue_conf->tx_pause.tc, tc_max); return -EINVAL; } @@ -4482,7 +4482,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_info == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC info param is NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC info param is NULL for port (%u)", port_id); return -EINVAL; } @@ -4511,7 +4511,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_conf == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC parameters are NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC parameters are NULL for port (%u)", port_id); return -EINVAL; } @@ -4525,7 +4525,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, return ret; if (pfc_info.tc_max == 0) { - RTE_ETHDEV_LOG(ERR, "Ethdev port %u does not support PFC TC values\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port %u does not support PFC TC values", port_id); return -ENOTSUP; } @@ -4533,14 +4533,14 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, /* Check requested mode supported or not */ if (pfc_info.mode_capa == RTE_ETH_FC_RX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Tx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Tx pause unsupported for port (%d)", port_id); return -EINVAL; } if (pfc_info.mode_capa == RTE_ETH_FC_TX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Rx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Rx pause unsupported for port (%d)", port_id); return -EINVAL; } @@ -4597,7 +4597,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t i, idx, shift; if (max_rxq == 0) { - RTE_ETHDEV_LOG(ERR, "No receive queue is available\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No receive queue is available"); return -EINVAL; } @@ -4606,8 +4606,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, shift = i % RTE_ETH_RETA_GROUP_SIZE; if ((reta_conf[idx].mask & RTE_BIT64(shift)) && (reta_conf[idx].reta[shift] >= max_rxq)) { - RTE_ETHDEV_LOG(ERR, - "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u", idx, shift, reta_conf[idx].reta[shift], max_rxq); return -EINVAL; @@ 
-4630,15 +4630,15 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA to NULL", port_id); return -EINVAL; } if (reta_size == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA with zero size", port_id); return -EINVAL; } @@ -4656,7 +4656,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } @@ -4682,8 +4682,8 @@ rte_eth_dev_rss_reta_query(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot query ethdev port %u RSS RETA from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot query ethdev port %u RSS RETA from NULL config", port_id); return -EINVAL; } @@ -4716,8 +4716,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS hash from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS hash from NULL config", port_id); return -EINVAL; } @@ -4729,8 +4729,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, rss_conf->rss_hf = rte_eth_rss_hf_refine(rss_conf->rss_hf); if ((dev_info.flow_type_rss_offloads | rss_conf->rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, rss_conf->rss_hf, dev_info.flow_type_rss_offloads); return -EINVAL; @@ -4738,14 +4738,14 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } if (rss_conf->rss_key != NULL && rss_conf->rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4753,9 +4753,9 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, if ((size_t)rss_conf->algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(rss_conf->algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, rss_conf->algorithm, dev_info.rss_algo_capa); return -EINVAL; } @@ -4782,8 +4782,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u RSS hash config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u RSS hash config to NULL", port_id); return -EINVAL; } @@ -4794,8 +4794,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, if (rss_conf->rss_key 
!= NULL && rss_conf->rss_key_len < dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4837,14 +4837,14 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4869,14 +4869,14 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4938,8 +4938,8 @@ rte_eth_fec_get_capability(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (speed_fec_capa == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FEC capability to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FEC capability to NULL when array size is non zero", port_id); return -EINVAL; } @@ -4963,8 +4963,8 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u current FEC mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u current FEC mode to NULL", port_id); return -EINVAL; } @@ -4988,7 +4988,7 @@ rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == 0) { - RTE_ETHDEV_LOG(ERR, "At least one FEC mode should be specified\n"); + RTE_ETHDEV_LOG_LINE(ERR, "At least one FEC mode should be specified"); return -EINVAL; } @@ -5040,8 +5040,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5050,12 +5050,12 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, return -ENOTSUP; if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } if (pool >= RTE_ETH_64_POOLS) { - RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1); + RTE_ETHDEV_LOG_LINE(ERR, "Pool ID must be 0-%d", RTE_ETH_64_POOLS - 1); return -EINVAL; } @@ -5063,7 +5063,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { index = eth_dev_get_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", 
port_id); return -ENOSPC; } @@ -5103,8 +5103,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot remove ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot remove ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5114,8 +5114,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) index = eth_dev_get_mac_addr_index(port_id, addr); if (index == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u: Cannot remove default MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: Cannot remove default MAC address", port_id); return -EADDRINUSE; } else if (index < 0) @@ -5146,8 +5146,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u default MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u default MAC address from NULL address", port_id); return -EINVAL; } @@ -5161,8 +5161,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) /* Keep address unique in dev->data->mac_addrs[]. */ index = eth_dev_get_mac_addr_index(port_id, addr); if (index > 0) { - RTE_ETHDEV_LOG(ERR, - "New default address for port %u was already in the address list. Please remove it first.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "New default address for port %u was already in the address list. Please remove it first.", port_id); return -EEXIST; } @@ -5220,14 +5220,14 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u unicast hash table from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u unicast hash table from NULL address", port_id); return -EINVAL; } if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } @@ -5239,15 +5239,15 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { if (!on) { - RTE_ETHDEV_LOG(ERR, - "Port %u: the MAC address was not set in UTA\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: the MAC address was not set in UTA", port_id); return -EINVAL; } index = eth_dev_get_hash_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", port_id); return -ENOSPC; } @@ -5309,15 +5309,15 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, link = dev->data->dev_link; if (queue_idx > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:port %u: invalid queue ID=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:port %u: invalid queue ID=%u", port_id, queue_idx); return -EINVAL; } if (tx_rate > link.link_speed) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d", tx_rate, link.link_speed); return -EINVAL; } @@ -5342,15 +5342,15 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id > dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set 
queue avail thresh: port %u: invalid queue ID=%u.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: invalid queue ID=%u.", port_id, queue_id); return -EINVAL; } if (avail_thresh > 99) { - RTE_ETHDEV_LOG(ERR, - "Set queue avail thresh: port %u: threshold should be <= 99.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: threshold should be <= 99.", port_id); return -EINVAL; } @@ -5415,14 +5415,14 @@ rte_eth_dev_callback_register(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot register ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot register ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5485,14 +5485,14 @@ rte_eth_dev_callback_unregister(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot unregister ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot unregister ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5551,13 +5551,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) dev = &rte_eth_devices[port_id]; if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -EPERM; } @@ -5568,8 +5568,8 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) rte_ethdev_trace_rx_intr_ctl(port_id, qid, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, qid, op, epfd, vec); } } @@ -5590,18 +5590,18 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id) dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -1; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -1; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -1; } @@ -5628,18 +5628,18 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector 
unset"); return -EPERM; } @@ -5649,8 +5649,8 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, rte_ethdev_trace_rx_intr_ctl_q(port_id, queue_id, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, queue_id, op, epfd, vec); return rc; } @@ -5949,28 +5949,28 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->rx_queues == NULL || dev->data->rx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Rx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_rx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -5997,28 +5997,28 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->tx_queues == NULL || dev->data->tx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Tx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_tx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -6068,13 +6068,13 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u burst mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6101,13 +6101,13 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Tx queue %u burst mode to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Tx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6134,13 +6134,13 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (pmc == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL", port_id, queue_id); return -EINVAL; } @@ -6224,8 +6224,8 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Rx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Rx timestamp to NULL", port_id); return -EINVAL; } @@ -6253,8 +6253,8 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Tx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Tx timestamp to NULL", port_id); return -EINVAL; } @@ -6299,8 +6299,8 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u timesync time to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u timesync time to NULL", port_id); return -EINVAL; } @@ -6325,8 +6325,8 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot write ethdev port %u timesync from NULL time\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot write ethdev port %u timesync from NULL time", port_id); return -EINVAL; } @@ -6351,7 +6351,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock) dev = &rte_eth_devices[port_id]; if (clock == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot read ethdev port %u clock to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot read ethdev port %u clock to NULL", port_id); return -EINVAL; } @@ -6375,8 +6375,8 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u register info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u register info to NULL", port_id); return -EINVAL; } @@ -6418,8 +6418,8 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM info to NULL", port_id); return -EINVAL; } @@ -6443,8 +6443,8 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u EEPROM from NULL info\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u EEPROM from NULL info", port_id); return -EINVAL; } @@ -6469,8 +6469,8 @@ rte_eth_dev_get_module_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (modinfo == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM module info to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM module info to NULL", port_id); return -EINVAL; } @@ -6495,22 +6495,22 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM info to NULL", port_id); return -EINVAL; } if (info->data == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM data to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM data to NULL", port_id); return -EINVAL; } if (info->length == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM to data with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM to data with zero size", port_id); return -EINVAL; } @@ -6535,8 +6535,8 @@ rte_eth_dev_get_dcb_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dcb_info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u DCB info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u DCB info to NULL", port_id); return -EINVAL; } @@ -6601,8 +6601,8 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (cap == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin capability to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin capability to NULL", port_id); return -EINVAL; } @@ -6627,8 +6627,8 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool) dev = &rte_eth_devices[port_id]; if (pool == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot test ethdev port %u mempool operation from NULL pool\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot test ethdev port %u mempool operation from NULL pool", port_id); return -EINVAL; } @@ -6672,14 +6672,14 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features) dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured != 0) { - RTE_ETHDEV_LOG(ERR, - "The port (ID=%"PRIu16") is already configured\n", + RTE_ETHDEV_LOG_LINE(ERR, + "The port (ID=%"PRIu16") is already configured", port_id); return -EBUSY; } if (features == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid features (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid features (NULL)"); return -EINVAL; } @@ -6708,15 +6708,15 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Device with port_id=%u is not configured, " - "cannot get IP reassembly capability\n", + "cannot get IP reassembly capability", port_id); return -EINVAL; } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly capability to NULL"); return -EINVAL; } @@ -6744,15 +6744,15 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Device with port_id=%u is not configured, " - "cannot get IP reassembly configuration\n", + "cannot get IP reassembly configuration", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly info to NULL"); return -EINVAL; } @@ -6778,24 +6778,24 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, dev = &rte_eth_devices[port_id]; if 
(dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Device with port_id=%u is not configured, " - "cannot set IP reassembly configuration\n", + "cannot set IP reassembly configuration", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Device with port_id=%u started, " - "cannot configure IP reassembly params.\n", + "cannot configure IP reassembly params.", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Invalid IP reassembly configuration (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid IP reassembly configuration (NULL)"); return -EINVAL; } @@ -6818,7 +6818,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) dev = &rte_eth_devices[port_id]; if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6837,12 +6837,12 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6863,12 +6863,12 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6890,8 +6890,8 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -6944,7 +6944,7 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -6952,30 +6952,30 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before Tx affinity mapping\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before Tx affinity mapping", port_id); return -EINVAL; } if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } aggr_ports = rte_eth_dev_count_aggr_ports(port_id); if (aggr_ports == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u has no aggregated port\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u has no aggregated port", port_id); return -ENOTSUP; } if (affinity > aggr_ports) { - RTE_ETHDEV_LOG(ERR, - "Port %u map invalid affinity %u exceeds the maximum number %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u map invalid affinity %u 
exceeds the maximum number %u", port_id, affinity, aggr_ports); return -EINVAL; } diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 77331ce652..18debce99c 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -176,9 +176,11 @@ extern "C" { #include "rte_dev_info.h" extern int rte_eth_dev_logtype; +#define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype -#define RTE_ETHDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) +#define RTE_ETHDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) struct rte_mbuf; @@ -2000,14 +2002,14 @@ struct rte_eth_fec_capa { /* Macros to check for valid port */ #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return retval; \ } \ } while (0) #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return; \ } \ } while (0) @@ -6052,8 +6054,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6067,7 +6069,7 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6123,8 +6125,8 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6196,8 +6198,8 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6267,8 +6269,8 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6391,8 +6393,8 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6406,7 +6408,7 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", 
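Note (editorial, not part of the patch): the rte_ethdev.h hunk above builds RTE_ETHDEV_LOG_LINE() on RTE_LOG() and the RTE_FMT() helpers so the trailing newline is appended once, inside the macro, instead of at every call site. Below is a minimal, self-contained sketch of that mechanism using plain printf() stand-ins rather than the real rte_common.h helpers; names and output are illustrative only.

#include <stdio.h>

/*
 * Simplified stand-ins for rte_common.h's RTE_FMT_HEAD()/RTE_FMT_TAIL(),
 * kept here only to show the mechanics of the LOG_LINE conversion.
 */
#define FMT_HEAD(fmt, ...) fmt          /* keep only the format string */
#define FMT_TAIL(fmt, ...) __VA_ARGS__  /* keep only the arguments     */

/*
 * Append "\n" to the format string at compile time, the way
 * RTE_ETHDEV_LOG_LINE() does. The "%.0s" plus the trailing "" mirror the
 * RTE_FMT() trick that keeps the expansion valid when the caller passes
 * no extra argument.
 */
#define LOG_LINE(...) \
	printf(FMT_HEAD(__VA_ARGS__,) "\n%.0s", FMT_TAIL(__VA_ARGS__,) "")

int main(void)
{
	LOG_LINE("Invalid port_id=%u", 7u);  /* prints "Invalid port_id=7\n" */
	LOG_LINE("Rx Intr handle unset");    /* no argument also compiles    */
	return 0;
}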
queue_id, port_id); return 0; } @@ -6501,8 +6503,8 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); rte_errno = ENODEV; return 0; @@ -6515,12 +6517,12 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (!rte_eth_dev_is_valid_port(port_id)) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx port_id=%u", port_id); rte_errno = ENODEV; return 0; } if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); rte_errno = EINVAL; return 0; @@ -6706,8 +6708,8 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid tx_port_id=%u or tx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid tx_port_id=%u or tx_queue_id=%u", tx_port_id, tx_queue_id); return 0; } @@ -6721,7 +6723,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0); if (qd1 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", tx_queue_id, tx_port_id); return 0; } @@ -6732,7 +6734,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u", rx_port_id, rx_queue_id); return 0; } @@ -6746,7 +6748,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0); if (qd2 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", rx_queue_id, rx_port_id); return 0; } diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c index a9c4637521..41e38bdc89 100644 --- a/lib/ethdev/rte_ethdev_cman.c +++ b/lib/ethdev/rte_ethdev_cman.c @@ -21,12 +21,12 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management info is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management info is NULL"); return -EINVAL; } if (dev->dev_ops->cman_info_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -49,12 +49,12 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_init == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -77,12 +77,12 @@ rte_eth_cman_config_set(uint16_t port_id, const struct rte_eth_cman_config *conf dev = &rte_eth_devices[port_id]; if (config == 
NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_set == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -104,12 +104,12 @@ rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } diff --git a/lib/ethdev/rte_ethdev_telemetry.c b/lib/ethdev/rte_ethdev_telemetry.c index b01028ce9b..6b873e7abe 100644 --- a/lib/ethdev/rte_ethdev_telemetry.c +++ b/lib/ethdev/rte_ethdev_telemetry.c @@ -36,8 +36,8 @@ eth_dev_parse_port_params(const char *params, uint16_t *port_id, pi = strtoul(params, end_param, 0); if (**end_param != '\0' && !has_next) - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (pi >= UINT16_MAX || !rte_eth_dev_is_valid_port(pi)) return -EINVAL; @@ -153,8 +153,8 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, kvlist = rte_kvargs_parse(end_param, valid_keys); ret = rte_kvargs_process(kvlist, NULL, eth_dev_parse_hide_zero, &hide_zero); if (kvlist == NULL || ret != 0) - RTE_ETHDEV_LOG(NOTICE, - "Unknown extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Unknown extra parameters passed to ethdev telemetry command, ignoring"); rte_kvargs_free(kvlist); } @@ -445,8 +445,8 @@ eth_dev_handle_port_flow_ctrl(const char *cmd __rte_unused, ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get flow ctrl info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get flow ctrl info, ret = %d", ret); return ret; } @@ -496,8 +496,8 @@ ethdev_parse_queue_params(const char *params, bool is_rx, qid = strtoul(qid_param, &end_param, 0); } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (qid >= UINT16_MAX) return -EINVAL; @@ -522,8 +522,8 @@ eth_dev_add_burst_mode(uint16_t port_id, uint16_t queue_id, return 0; if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get burst mode for port %u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get burst mode for port %u", port_id); return ret; } @@ -689,8 +689,8 @@ eth_dev_add_dcb_info(uint16_t port_id, struct rte_tel_data *d) ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get dcb info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get dcb info, ret = %d", ret); return ret; } @@ -769,8 +769,8 @@ eth_dev_handle_port_rss_info(const char *cmd __rte_unused, ret = rte_eth_dev_info_get(port_id, &dev_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device info, ret = %d", ret); return ret; } @@ -823,7 +823,7 @@ eth_dev_fec_capas_to_string(uint32_t fec_capa, char 
*fec_name, uint32_t len) count = snprintf(fec_name, len, "unknown "); if (count >= len) { - RTE_ETHDEV_LOG(WARNING, "FEC capa names may be truncated\n"); + RTE_ETHDEV_LOG_LINE(WARNING, "FEC capa names may be truncated"); count = len; } @@ -994,8 +994,8 @@ eth_dev_handle_port_vlan(const char *cmd __rte_unused, ret = rte_eth_dev_conf_get(port_id, &dev_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device configuration, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device configuration, ret = %d", ret); return ret; } @@ -1115,7 +1115,7 @@ eth_dev_handle_port_tm_caps(const char *cmd __rte_unused, ret = rte_tm_capabilities_get(port_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1229,8 +1229,8 @@ eth_dev_parse_tm_params(char *params, uint32_t *result) ret = strtoul(splited_param, ¶ms, 0); if (*params != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (ret >= UINT32_MAX) return -EINVAL; @@ -1263,7 +1263,7 @@ eth_dev_handle_port_tm_level_caps(const char *cmd __rte_unused, ret = rte_tm_level_capabilities_get(port_id, level_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1389,7 +1389,7 @@ eth_dev_handle_port_tm_node_caps(const char *cmd __rte_unused, return 0; out: - RTE_ETHDEV_LOG(WARNING, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(WARNING, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 549e329558..f49d1d3767 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -18,6 +18,8 @@ #include "ethdev_trace.h" +#define FLOW_LOG RTE_ETHDEV_LOG_LINE + /* Mbuf dynamic field name for metadata. 
*/ int32_t rte_flow_dynf_metadata_offs = -1; @@ -1614,13 +1616,13 @@ rte_flow_info_get(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (port_info == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (likely(!!ops->info_get)) { @@ -1651,23 +1653,23 @@ rte_flow_configure(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" already started.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" already started.", port_id); return -EINVAL; } if (port_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (queue_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.", port_id); return -EINVAL; } if ((port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) && @@ -1704,8 +1706,8 @@ rte_flow_pattern_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1713,8 +1715,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1722,8 +1724,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (pattern == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1791,8 +1793,8 @@ rte_flow_actions_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1800,8 +1802,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1809,8 +1811,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (actions == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1818,8 +1820,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (masks == NULL) { - 
RTE_FLOW_LOG(ERR, - "Port %"PRIu16" masks is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" masks is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1889,8 +1891,8 @@ rte_flow_template_table_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1898,8 +1900,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (table_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" table attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" table attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1907,8 +1909,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (pattern_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1916,8 +1918,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (actions_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index affdc8121b..78b6bbb159 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -46,9 +46,6 @@ extern "C" { #endif -#define RTE_FLOW_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) - /** * Flow rule attributes. 
* diff --git a/lib/ethdev/sff_telemetry.c b/lib/ethdev/sff_telemetry.c index f29e7fa882..b3f239d967 100644 --- a/lib/ethdev/sff_telemetry.c +++ b/lib/ethdev/sff_telemetry.c @@ -19,7 +19,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) int ret; if (d == NULL) { - RTE_ETHDEV_LOG(ERR, "Dict invalid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Dict invalid"); return; } @@ -27,16 +27,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module info, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module info, %d", ret); break; } return; @@ -46,7 +46,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) einfo.length = minfo.eeprom_len; einfo.data = calloc(1, minfo.eeprom_len); if (einfo.data == NULL) { - RTE_ETHDEV_LOG(ERR, "Allocation of port %u EEPROM data failed\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Allocation of port %u EEPROM data failed", port_id); return; } @@ -54,16 +54,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module EEPROM, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module EEPROM, %d", ret); break; } free(einfo.data); @@ -84,7 +84,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) sff_8636_show_all(einfo.data, einfo.length, d); break; default: - RTE_ETHDEV_LOG(NOTICE, "Unsupported module type: %u\n", minfo.type); + RTE_ETHDEV_LOG_LINE(NOTICE, "Unsupported module type: %u", minfo.type); break; } @@ -99,7 +99,7 @@ ssf_add_dict_string(struct rte_tel_data *d, const char *name_str, const char *va if (d->type != TEL_DICT) return; if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) { - RTE_ETHDEV_LOG(ERR, "data_len has exceeded the maximum number of inserts\n"); + RTE_ETHDEV_LOG_LINE(ERR, "data_len has exceeded the maximum number of inserts"); return; } @@ -135,13 +135,13 @@ eth_dev_handle_port_module_eeprom(const char *cmd __rte_unused, const char *para port_id = strtoul(params, &end_param, 0); if (errno != 0 || port_id >= UINT16_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid argument, %d\n", errno); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid argument, %d", errno); return -1; } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters [%s] passed to ethdev telemetry command, ignoring\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters [%s] passed to ethdev telemetry command, ignoring", end_param); rte_tel_data_start_dict(d); diff --git a/lib/member/member.h b/lib/member/member.h new file mode 100644 index 0000000000..ce150f7689 --- /dev/null +++ b/lib/member/member.h @@ -0,0 +1,14 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +extern int librte_member_logtype; +#define RTE_LOGTYPE_MEMBER librte_member_logtype + +#define MEMBER_LOG(level, ...) \ + RTE_LOG(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + __func__, RTE_FMT_TAIL(__VA_ARGS__,))) + diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c index 8f859f7fbd..57eb7affab 100644 --- a/lib/member/rte_member.c +++ b/lib/member/rte_member.c @@ -11,6 +11,7 @@ #include <rte_tailq.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" #include "rte_member_vbf.h" @@ -102,8 +103,8 @@ rte_member_create(const struct rte_member_parameters *params) if (params->key_len == 0 || params->prim_hash_seed == params->sec_hash_seed) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Create setsummary with " - "invalid parameters\n"); + MEMBER_LOG(ERR, "Create setsummary with " + "invalid parameters"); return NULL; } @@ -112,7 +113,7 @@ rte_member_create(const struct rte_member_parameters *params) sketch_key_ring = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(params->top_k), params->socket_id, 0); if (sketch_key_ring == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed"); return NULL; } } @@ -135,7 +136,7 @@ rte_member_create(const struct rte_member_parameters *params) } te = rte_zmalloc("MEMBER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_MEMBER_LOG(ERR, "tailq entry allocation failed\n"); + MEMBER_LOG(ERR, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -144,7 +145,7 @@ rte_member_create(const struct rte_member_parameters *params) sizeof(struct rte_member_setsum), RTE_CACHE_LINE_SIZE, params->socket_id); if (setsum == NULL) { - RTE_MEMBER_LOG(ERR, "Create setsummary failed\n"); + MEMBER_LOG(ERR, "Create setsummary failed"); goto error_unlock_exit; } strlcpy(setsum->name, params->name, sizeof(setsum->name)); @@ -171,8 +172,8 @@ rte_member_create(const struct rte_member_parameters *params) if (ret < 0) goto error_unlock_exit; - RTE_MEMBER_LOG(DEBUG, "Creating a setsummary table with " - "mode %u\n", setsum->type); + MEMBER_LOG(DEBUG, "Creating a setsummary table with " + "mode %u", setsum->type); te->data = (void *)setsum; TAILQ_INSERT_TAIL(member_list, te, next); diff --git a/lib/member/rte_member.h b/lib/member/rte_member.h index b585904368..3278bbb5c1 100644 --- a/lib/member/rte_member.h +++ b/lib/member/rte_member.h @@ -100,15 +100,6 @@ typedef uint16_t member_set_t; #define MEMBER_HASH_FUNC rte_jhash #endif -extern int librte_member_logtype; - -#define RTE_MEMBER_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, \ - librte_member_logtype, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ - __func__, \ - RTE_FMT_TAIL(__VA_ARGS__,))) - /** @internal setsummary structure. 
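Note (editorial, not part of the patch): the new member.h above keeps the "%s(): " function-name prefix that the removed RTE_MEMBER_LOG() already had, and additionally appends the "\n" itself, which is why the converted call sites below drop their trailing newlines. A simplified stand-in, assuming plain printf() instead of RTE_LOG() and the MEMBER logtype (##__VA_ARGS__ is a GNU extension, as used elsewhere in DPDK):

#include <stdio.h>

#define MEMBER_LOG_DEMO(fmt, ...) \
	printf("%s(): " fmt "\n", __func__, ##__VA_ARGS__)

static void create_setsummary_demo(void)
{
	/* Message text borrowed from the hunk above; no trailing "\n" needed. */
	MEMBER_LOG_DEMO("Sketch Ring Memory allocation failed");
}

int main(void)
{
	create_setsummary_demo();
	/* Output: create_setsummary_demo(): Sketch Ring Memory allocation failed */
	return 0;
}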
*/ struct rte_member_setsum; diff --git a/lib/member/rte_member_heap.h b/lib/member/rte_member_heap.h index 9c4a01aebe..e0a3d54eab 100644 --- a/lib/member/rte_member_heap.h +++ b/lib/member/rte_member_heap.h @@ -6,6 +6,7 @@ #ifndef RTE_MEMBER_HEAP_H #define RTE_MEMBER_HEAP_H +#include "member.h" #include <rte_ring_elem.h> #include "rte_member.h" @@ -129,16 +130,16 @@ resize_hash_table(struct minheap *hp) while (1) { new_bkt_cnt = hp->hashtable->bkt_cnt * HASH_RESIZE_MULTI; - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]\n", + MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]", hp->hashtable->num_item / ((float)hp->hashtable->bkt_cnt * HASH_BKT_SIZE)); - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!"); rte_free(hp->hashtable); hp->hashtable = rte_zmalloc_socket(NULL, sizeof(struct hash) + new_bkt_cnt * sizeof(struct hash_bkt), RTE_CACHE_LINE_SIZE, hp->socket); if (hp->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); return -ENOMEM; } @@ -147,8 +148,8 @@ resize_hash_table(struct minheap *hp) for (i = 0; i < hp->size; ++i) { if (hash_table_insert(hp->elem[i].key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, - "Sketch Minheap HT resize insert fail!\n"); + MEMBER_LOG(ERR, + "Sketch Minheap HT resize insert fail!"); break; } } @@ -174,7 +175,7 @@ rte_member_minheap_init(struct minheap *heap, int size, heap->elem = rte_zmalloc_socket(NULL, sizeof(struct node) * size, RTE_CACHE_LINE_SIZE, socket); if (heap->elem == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed"); return -ENOMEM; } @@ -188,7 +189,7 @@ rte_member_minheap_init(struct minheap *heap, int size, RTE_CACHE_LINE_SIZE, socket); if (heap->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); rte_free(heap->elem); return -ENOMEM; } @@ -231,13 +232,13 @@ rte_member_heapify(struct minheap *hp, uint32_t idx, bool update_hash) if (update_hash) { if (hash_table_update(hp->elem[smallest].key, idx + 1, smallest + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } if (hash_table_update(hp->elem[idx].key, smallest + 1, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } } @@ -255,7 +256,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, uint32_t slot_id; if (rte_ring_sc_dequeue_elem(free_key_slot, &slot_id, sizeof(uint32_t)) != 0) { - RTE_MEMBER_LOG(ERR, "Minheap get empty keyslot failed\n"); + MEMBER_LOG(ERR, "Minheap get empty keyslot failed"); return -1; } @@ -270,7 +271,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -279,7 +280,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, if (hash_table_insert(key, i + 1, hp->key_len, hp->hashtable) < 0) { if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash 
Table resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table resize failed"); return -1; } } @@ -296,7 +297,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, uint32_t offset = RTE_PTR_DIFF(hp->elem[idx].key, key_slot) / hp->key_len; if (hash_table_del(key, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -311,7 +312,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, if (hash_table_update(hp->elem[idx].key, hp->size, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -332,7 +333,7 @@ rte_member_minheap_replace_node(struct minheap *hp, recycle_key = hp->elem[0].key; if (hash_table_del(recycle_key, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -340,7 +341,7 @@ rte_member_minheap_replace_node(struct minheap *hp, if (hash_table_update(hp->elem[0].key, hp->size, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -358,7 +359,7 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -367,9 +368,9 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = nd; if (hash_table_insert(new_key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed"); if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed"); return -1; } } diff --git a/lib/member/rte_member_ht.c b/lib/member/rte_member_ht.c index a85561b472..357097ff4b 100644 --- a/lib/member/rte_member_ht.c +++ b/lib/member/rte_member_ht.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_vect.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" @@ -84,8 +85,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, !rte_is_power_of_2(RTE_MEMBER_BUCKET_ENTRIES) || num_entries < RTE_MEMBER_BUCKET_ENTRIES) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership HT create with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership HT create with invalid parameters"); return -EINVAL; } @@ -98,8 +99,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, RTE_CACHE_LINE_SIZE, ss->socket_id); if (buckets == NULL) { - RTE_MEMBER_LOG(ERR, "memory allocation failed for HT " - "setsummary\n"); + MEMBER_LOG(ERR, "memory allocation failed for HT " + "setsummary"); return -ENOMEM; } @@ -121,8 +122,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, #endif ss->sig_cmp_fn = RTE_MEMBER_COMPARE_SCALAR; - RTE_MEMBER_LOG(DEBUG, "Hash table based filter created, " - "the table has %u entries, %u buckets\n", + MEMBER_LOG(DEBUG, "Hash table based filter created, " + "the table has %u entries, %u buckets", num_entries, num_buckets); return 0; } diff --git a/lib/member/rte_member_sketch.c 
b/lib/member/rte_member_sketch.c index d5f35aabe9..e006e835d9 100644 --- a/lib/member/rte_member_sketch.c +++ b/lib/member/rte_member_sketch.c @@ -14,6 +14,7 @@ #include <rte_prefetch.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_sketch.h" #include "rte_member_heap.h" @@ -118,8 +119,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (params->sample_rate == 0 || params->sample_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership Sketch created with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership Sketch created with invalid parameters"); return -EINVAL; } @@ -141,8 +142,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (ss->use_avx512 == true) { #ifdef CC_AVX512_SUPPORT ss->num_row = NUM_ROW_VEC; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch AVX512 update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch AVX512 update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_avx512; ss->sketch_lookup = sketch_lookup_avx512; ss->sketch_delete = sketch_delete_avx512; @@ -151,8 +152,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, #endif { ss->num_row = NUM_ROW_SCALAR; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch SCALAR update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch SCALAR update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_scalar; ss->sketch_lookup = sketch_lookup_scalar; ss->sketch_delete = sketch_delete_scalar; @@ -173,21 +174,21 @@ rte_member_create_sketch(struct rte_member_setsum *ss, sizeof(uint64_t) * num_col * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->table == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Table memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Table memory allocation failed"); return -ENOMEM; } ss->hash_seeds = rte_zmalloc_socket(NULL, sizeof(uint64_t) * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->hash_seeds == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed"); return -ENOMEM; } ss->runtime_var = rte_zmalloc_socket(NULL, sizeof(struct sketch_runtime), RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->runtime_var == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed"); rte_free(ss); return -ENOMEM; } @@ -205,7 +206,7 @@ rte_member_create_sketch(struct rte_member_setsum *ss, runtime->key_slots = rte_zmalloc_socket(NULL, ss->key_len * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->key_slots == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Key Slots allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Key Slots allocation failed"); goto error; } @@ -216,14 +217,14 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (rte_member_minheap_init(&(runtime->heap), params->top_k, ss->socket_id, params->prim_hash_seed) < 0) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap allocation failed"); goto error_runtime; } runtime->report_array = rte_zmalloc_socket(NULL, sizeof(struct node) * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->report_array == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed"); goto error_runtime; } @@ -239,8 +240,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, ss->converge_thresh = 10 * 
pow(ss->error_rate, -2.0) * sqrt(log(1 / delta)); } - RTE_MEMBER_LOG(DEBUG, "Sketch created, " - "the total memory required is %u Bytes\n", ss->num_col * ss->num_row * 8); + MEMBER_LOG(DEBUG, "Sketch created, " + "the total memory required is %u Bytes", ss->num_col * ss->num_row * 8); return 0; @@ -382,8 +383,8 @@ should_converge(const struct rte_member_setsum *ss) /* For count min sketch - L1 norm */ if (runtime_var->pkt_cnt > ss->converge_thresh) { runtime_var->converged = 1; - RTE_MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " - "from key count %"PRIu64"\n", + MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " + "from key count %"PRIu64, runtime_var->pkt_cnt); } } @@ -471,8 +472,8 @@ rte_member_add_sketch(const struct rte_member_setsum *ss, * the rte_member_add_sketch_byte_count routine should be used. */ if (ss->count_byte == 1) { - RTE_MEMBER_LOG(ERR, "Sketch is Byte Mode, " - "should use rte_member_add_byte_count()!\n"); + MEMBER_LOG(ERR, "Sketch is Byte Mode, " + "should use rte_member_add_byte_count()!"); return -EINVAL; } @@ -528,8 +529,8 @@ rte_member_add_sketch_byte_count(const struct rte_member_setsum *ss, /* should not call this API if not in count byte mode */ if (ss->count_byte == 0) { - RTE_MEMBER_LOG(ERR, "Sketch is Pkt Mode, " - "should use rte_member_add()!\n"); + MEMBER_LOG(ERR, "Sketch is Pkt Mode, " + "should use rte_member_add()!"); return -EINVAL; } diff --git a/lib/member/rte_member_vbf.c b/lib/member/rte_member_vbf.c index 5a0c51ecc0..5ad9487fad 100644 --- a/lib/member/rte_member_vbf.c +++ b/lib/member/rte_member_vbf.c @@ -9,6 +9,7 @@ #include <rte_errno.h> #include <rte_log.h> +#include "member.h" #include "rte_member.h" #include "rte_member_vbf.h" @@ -35,7 +36,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, params->false_positive_rate == 0 || params->false_positive_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership vBF create with invalid parameters\n"); + MEMBER_LOG(ERR, "Membership vBF create with invalid parameters"); return -EINVAL; } @@ -56,7 +57,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, if (fp_one_bf == 0) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership BF false positive rate is too small\n"); + MEMBER_LOG(ERR, "Membership BF false positive rate is too small"); return -EINVAL; } @@ -111,10 +112,10 @@ rte_member_create_vbf(struct rte_member_setsum *ss, ss->mul_shift = rte_ctz32(ss->num_set); ss->div_shift = rte_ctz32(32 >> ss->mul_shift); - RTE_MEMBER_LOG(DEBUG, "vector bloom filter created, " + MEMBER_LOG(DEBUG, "vector bloom filter created, " "each bloom filter expects %u keys, needs %u bits, %u hashes, " "with false positive rate set as %.5f, " - "The new calculated vBF false positive rate is %.5f\n", + "The new calculated vBF false positive rate is %.5f", num_keys_per_bf, ss->bits, ss->num_hashes, fp_one_bf, new_fp); ss->table = rte_zmalloc_socket(NULL, ss->num_set * (ss->bits >> 3), diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 5a1ec14d7a..70963e7ee7 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -16,10 +16,10 @@ #include "rte_pdump.h" RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); +#define RTE_LOGTYPE_PDUMP pdump_logtype -/* Macro for printing using RTE_LOG */ -#define PDUMP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, pdump_logtype, "%s(): " fmt, \ +#define PDUMP_LOG_LINE(level, fmt, args...) 
\ + RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ __func__, ## args) /* Used for the multi-process communication */ @@ -181,8 +181,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "rx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "rx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -195,8 +195,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_first_rx_callback(port, qid, pdump_rx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add rx callback, errno=%d", rte_errno); return rte_errno; } @@ -204,15 +204,15 @@ pdump_register_rx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing rx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing rx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_rx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove rx callback, errno=%d", -ret); return ret; } @@ -239,8 +239,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "tx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "tx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -253,8 +253,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add tx callback, errno=%d", rte_errno); return rte_errno; } @@ -262,15 +262,15 @@ pdump_register_tx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing tx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing tx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_tx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove tx callback, errno=%d", -ret); return ret; } @@ -295,22 +295,22 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) /* Check for possible DPDK version mismatch */ if (!(p->ver == V1 || p->ver == V2)) { - PDUMP_LOG(ERR, - "incorrect client version %u\n", p->ver); + PDUMP_LOG_LINE(ERR, + "incorrect client version %u", p->ver); return -EINVAL; } if (p->prm) { if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) { - PDUMP_LOG(ERR, - "invalid BPF program type: %u\n", + PDUMP_LOG_LINE(ERR, + "invalid BPF program type: %u", p->prm->prog_arg.type); return -EINVAL; } filter = rte_bpf_load(p->prm); if (filter == NULL) { - PDUMP_LOG(ERR, "cannot load BPF filter: %s\n", + PDUMP_LOG_LINE(ERR, "cannot load BPF filter: %s", rte_strerror(rte_errno)); return -rte_errno; } @@ -324,8 +324,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_get_port_by_name(p->device, &port); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to get port id for device id=%s\n", + PDUMP_LOG_LINE(ERR, + "failed to get port id for device id=%s", p->device); return -EINVAL; } @@ -336,8 +336,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + 
PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -345,19 +345,19 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) nb_rx_q = dev_info.nb_rx_queues; nb_tx_q = dev_info.nb_tx_queues; if (nb_rx_q == 0 && flags & RTE_PDUMP_FLAG_RX) { - PDUMP_LOG(ERR, - "number of rx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of rx queues cannot be 0"); return -EINVAL; } if (nb_tx_q == 0 && flags & RTE_PDUMP_FLAG_TX) { - PDUMP_LOG(ERR, - "number of tx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of tx queues cannot be 0"); return -EINVAL; } if ((nb_tx_q == 0 || nb_rx_q == 0) && flags == RTE_PDUMP_FLAG_RXTX) { - PDUMP_LOG(ERR, - "both tx&rx queues must be non zero\n"); + PDUMP_LOG_LINE(ERR, + "both tx&rx queues must be non zero"); return -EINVAL; } } @@ -394,7 +394,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) /* recv client requests */ if (mp_msg->len_param != sizeof(*cli_req)) { - PDUMP_LOG(ERR, "failed to recv from client\n"); + PDUMP_LOG_LINE(ERR, "failed to recv from client"); resp->err_value = -EINVAL; } else { cli_req = (const struct pdump_request *)mp_msg->param; @@ -407,7 +407,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) mp_resp.len_param = sizeof(*resp); mp_resp.num_fds = 0; if (rte_mp_reply(&mp_resp, peer) < 0) { - PDUMP_LOG(ERR, "failed to send to client:%s\n", + PDUMP_LOG_LINE(ERR, "failed to send to client:%s", strerror(rte_errno)); return -1; } @@ -424,7 +424,7 @@ rte_pdump_init(void) mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats), rte_socket_id(), 0); if (mz == NULL) { - PDUMP_LOG(ERR, "cannot allocate pdump statistics\n"); + PDUMP_LOG_LINE(ERR, "cannot allocate pdump statistics"); rte_errno = ENOMEM; return -1; } @@ -454,22 +454,22 @@ static int pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp) { if (ring == NULL || mp == NULL) { - PDUMP_LOG(ERR, "NULL ring or mempool\n"); + PDUMP_LOG_LINE(ERR, "NULL ring or mempool"); rte_errno = EINVAL; return -1; } if (mp->flags & RTE_MEMPOOL_F_SP_PUT || mp->flags & RTE_MEMPOOL_F_SC_GET) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "mempool with SP or SC set not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } if (rte_ring_is_prod_single(ring) || rte_ring_is_cons_single(ring)) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "ring with SP or SC set is not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } @@ -481,16 +481,16 @@ static int pdump_validate_flags(uint32_t flags) { if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) { - PDUMP_LOG(ERR, - "invalid flags, should be either rx/tx/rxtx\n"); + PDUMP_LOG_LINE(ERR, + "invalid flags, should be either rx/tx/rxtx"); rte_errno = EINVAL; return -1; } /* mask off the flags we know about */ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) { - PDUMP_LOG(ERR, - "unknown flags: %#x\n", flags); + PDUMP_LOG_LINE(ERR, + "unknown flags: %#x", flags); rte_errno = ENOTSUP; return -1; } @@ -504,14 +504,14 @@ pdump_validate_port(uint16_t port, char *name) int ret = 0; if (port >= RTE_MAX_ETHPORTS) { - PDUMP_LOG(ERR, "Invalid port id %u\n", port); + PDUMP_LOG_LINE(ERR, "Invalid port id %u", port); rte_errno = EINVAL; return -1; } ret = rte_eth_dev_get_name_by_port(port, name); if (ret < 0) { - PDUMP_LOG(ERR, "port %u to name mapping failed\n", + PDUMP_LOG_LINE(ERR, "port %u to name mapping failed", port); rte_errno = EINVAL; return -1; @@ 
-536,8 +536,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, struct pdump_response *resp; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - PDUMP_LOG(ERR, - "pdump enable/disable not allowed in primary process\n"); + PDUMP_LOG_LINE(ERR, + "pdump enable/disable not allowed in primary process"); return -EINVAL; } @@ -570,8 +570,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, } if (ret < 0) - PDUMP_LOG(ERR, - "client request for pdump enable/disable failed\n"); + PDUMP_LOG_LINE(ERR, + "client request for pdump enable/disable failed"); return ret; } @@ -738,8 +738,8 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) memset(stats, 0, sizeof(*stats)); ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -747,7 +747,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) if (pdump_stats == NULL) { if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* rte_pdump_init was not called */ - PDUMP_LOG(ERR, "pdump stats not initialized\n"); + PDUMP_LOG_LINE(ERR, "pdump stats not initialized"); rte_errno = EINVAL; return -1; } @@ -756,7 +756,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS); if (mz == NULL) { /* rte_pdump_init was not called in primary process?? */ - PDUMP_LOG(ERR, "can not find pdump stats\n"); + PDUMP_LOG_LINE(ERR, "can not find pdump stats"); rte_errno = EINVAL; return -1; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index dd143f2cc8..aecfdfa15d 100644 --- a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -72,7 +72,7 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -155,7 +155,7 @@ power_get_available_freqs(struct acpi_power_info *pi) /* Store the available frequencies into power context */ for (i = 0, pi->nb_freqs = 0; i < count; i++) { - POWER_DEBUG_TRACE("Lcore %u frequency[%d]: %s\n", pi->lcore_id, + POWER_DEBUG_LOG("Lcore %u frequency[%d]: %s", pi->lcore_id, i, freqs[i]); pi->freqs[pi->nb_freqs++] = strtoul(freqs[i], &p, POWER_CONVERT_TO_DECIMAL); @@ -164,17 +164,17 @@ power_get_available_freqs(struct acpi_power_info *pi) if ((pi->freqs[0]-1000) == pi->freqs[1]) { pi->turbo_available = 1; pi->turbo_enable = 1; - POWER_DEBUG_TRACE("Lcore %u Can do Turbo Boost\n", + POWER_DEBUG_LOG("Lcore %u Can do Turbo Boost", pi->lcore_id); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Turbo Boost not available on Lcore %u\n", + POWER_DEBUG_LOG("Turbo Boost not available on Lcore %u", pi->lcore_id); } ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", count, pi->lcore_id); out: if (f != NULL) diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index 44581fd48b..f8f43a49b2 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -79,7 +79,7 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) if (idx == 
pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -153,14 +153,14 @@ power_check_turbo(struct amd_pstate_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -277,7 +277,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bc57642cd1..b3d438c4de 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -182,8 +182,8 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Check if current governor is already what we want */ if (strcmp(buf, new_governor) == 0) { ret = 0; - POWER_DEBUG_TRACE("Power management governor of lcore %u is " - "already %s\n", lcore_id, new_governor); + POWER_DEBUG_LOG("Power management governor of lcore %u is " + "already %s", lcore_id, new_governor); goto out; } diff --git a/lib/power/power_common.h b/lib/power/power_common.h index c3fcbf4c10..ea2febbd86 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -14,10 +14,10 @@ extern int power_logtype; #define RTE_LOGTYPE_POWER power_logtype #ifdef RTE_LIBRTE_POWER_DEBUG -#define POWER_DEBUG_TRACE(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt, __func__, ## args) +#define POWER_DEBUG_LOG(fmt, args...) \ + RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) #else -#define POWER_DEBUG_TRACE(fmt, args...) +#define POWER_DEBUG_LOG(fmt, args...) #endif /* check if scaling driver matches one we want */ diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index 83e1e62830..31eb6942a2 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -82,7 +82,7 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -172,14 +172,14 @@ power_check_turbo(struct cppc_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! 
highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -265,7 +265,7 @@ power_get_available_freqs(struct cppc_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 0ee8e603d2..2cc3045056 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -90,7 +90,7 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Uncore frequency '%u' to be set for pkg %02u die %02u\n", + POWER_DEBUG_LOG("Uncore frequency '%u' to be set for pkg %02u die %02u", target_uncore_freq, ui->pkg, ui->die); /* write the minimum value first if the target freq is less than current max */ @@ -235,7 +235,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of pkg %02u die %02u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of pkg %02u die %02u are available", num_uncore_freqs, ui->pkg, ui->die); out: diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 56aa302b5d..ca704e672c 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -104,7 +104,7 @@ power_read_turbo_pct(uint64_t *outVal) goto out; } - POWER_DEBUG_TRACE("power turbo pct: %"PRIu64"\n", *outVal); + POWER_DEBUG_LOG("power turbo pct: %"PRIu64, *outVal); out: close(fd); return ret; @@ -204,7 +204,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) max_non_turbo = base_min_ratio + (100 - max_non_turbo) * (base_max_ratio - base_min_ratio) / 100; - POWER_DEBUG_TRACE("no turbo perf %"PRIu64"\n", max_non_turbo); + POWER_DEBUG_LOG("no turbo perf %"PRIu64, max_non_turbo); pi->non_turbo_max_ratio = (uint32_t)max_non_turbo; @@ -310,7 +310,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_min); @@ -333,7 +333,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_max); @@ -434,7 +434,7 @@ power_get_available_freqs(struct pstate_power_info *pi) else base_max_freq = pi->non_turbo_max_ratio * BUS_FREQ; - POWER_DEBUG_TRACE("sys min %u, sys max %u, base_max %u\n", + POWER_DEBUG_LOG("sys min %u, sys max %u, base_max %u", sys_min_freq, sys_max_freq, base_max_freq); @@ -471,7 +471,7 @@ power_get_available_freqs(struct pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index d38a85eb0b..b2c4b49d97 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -73,16 +73,16 @@ regexdev_check_name(const char *name) size_t name_len; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Name can't be NULL\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Name can't be NULL"); return -EINVAL; } name_len = strnlen(name, RTE_REGEXDEV_NAME_MAX_LEN); if (name_len == 0) { - RTE_REGEXDEV_LOG(ERR, "Zero length RegEx device name\n"); + 
RTE_REGEXDEV_LOG_LINE(ERR, "Zero length RegEx device name"); return -EINVAL; } if (name_len >= RTE_REGEXDEV_NAME_MAX_LEN) { - RTE_REGEXDEV_LOG(ERR, "RegEx device name is too long\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device name is too long"); return -EINVAL; } return (int)name_len; @@ -101,17 +101,17 @@ rte_regexdev_register(const char *name) return NULL; dev = regexdev_allocated(name); if (dev != NULL) { - RTE_REGEXDEV_LOG(ERR, "RegEx device already allocated\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device already allocated"); return NULL; } dev_id = regexdev_find_free_dev(); if (dev_id == RTE_MAX_REGEXDEV_DEVS) { - RTE_REGEXDEV_LOG - (ERR, "Reached maximum number of RegEx devices\n"); + RTE_REGEXDEV_LOG_LINE + (ERR, "Reached maximum number of RegEx devices"); return NULL; } if (regexdev_shared_data_prepare() < 0) { - RTE_REGEXDEV_LOG(ERR, "Cannot allocate RegEx shared data\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Cannot allocate RegEx shared data"); return NULL; } @@ -215,8 +215,8 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) if (*dev->dev_ops->dev_configure == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -225,66 +225,66 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) return ret; if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_CROSS_BUFFER_SCAN_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_CROSS_BUFFER_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support cross buffer scan\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support cross buffer scan", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_AS_END_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_AS_END_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match as end\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match as end", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_ALL_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_ALL_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match all\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match all", dev_id); return -EINVAL; } if (cfg->nb_groups == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups must be > 0", dev_id); return -EINVAL; } if (cfg->nb_groups > dev_info.max_groups) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups %d > %d", dev_id, cfg->nb_groups, dev_info.max_groups); return -EINVAL; } if (cfg->nb_max_matches == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches must be > 0", dev_id); return -EINVAL; } if (cfg->nb_max_matches > dev_info.max_matches) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches %d > %d", dev_id, cfg->nb_max_matches, dev_info.max_matches); return -EINVAL; } if (cfg->nb_queue_pairs == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues must be > 0", dev_id); return -EINVAL; } if (cfg->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues %d > %d", dev_id, 
cfg->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } if (cfg->nb_rules_per_group == 0) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group must be > 0", dev_id); return -EINVAL; } if (cfg->nb_rules_per_group > dev_info.max_rules_per_group) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group %d > %d", dev_id, cfg->nb_rules_per_group, dev_info.max_rules_per_group); return -EINVAL; @@ -306,21 +306,21 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id, if (*dev->dev_ops->dev_qp_setup == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } if (queue_pair_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u invalid queue %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u invalid queue %d > %d", dev_id, queue_pair_id, dev->data->dev_conf.nb_queue_pairs); return -EINVAL; } if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -383,7 +383,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_get == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -401,7 +401,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_set == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -420,7 +420,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, if (*dev->dev_ops->dev_rule_db_update == NULL) return -ENOTSUP; if (rules == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -450,7 +450,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db, if (*dev->dev_ops->dev_db_import == NULL) return -ENOTSUP; if (rule_db == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -480,7 +480,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id, if (*dev->dev_ops->dev_xstats_names_get == NULL) return -ENOTSUP; if (xstats_map == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d xstats map can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d xstats map can't be NULL", dev_id); return -EINVAL; } @@ -498,11 +498,11 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_get == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d values can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d values can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_get)(dev, ids, values, n); @@ -519,15 +519,15 
@@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name, if (*dev->dev_ops->dev_xstats_by_name_get == NULL) return -ENOTSUP; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d name can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d name can't be NULL", dev_id); return -EINVAL; } if (id == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d id can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d id can't be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d value can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d value can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_by_name_get)(dev, name, id, value); @@ -544,7 +544,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_reset == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_reset)(dev, ids, nb_ids); @@ -572,7 +572,7 @@ rte_regexdev_dump(uint8_t dev_id, FILE *f) if (*dev->dev_ops->dev_dump == NULL) return -ENOTSUP; if (f == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d file can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d file can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_dump)(dev, f); diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index d50af775b5..dc111317a5 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -206,21 +206,23 @@ extern "C" { #define RTE_REGEXDEV_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN extern int rte_regexdev_logtype; +#define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype -#define RTE_REGEXDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_regexdev_logtype, "" __VA_ARGS__) +#define RTE_REGEXDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return retval; \ } \ } while (0) #define RTE_REGEXDEV_VALID_DEV_ID_OR_RET(dev_id) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return; \ } \ } while (0) @@ -1475,7 +1477,7 @@ rte_regexdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->enqueue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif @@ -1535,7 +1537,7 @@ rte_regexdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->dequeue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 92982842a8..5c655e2b25 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -56,7 +56,10 @@ static const char *socket_dir; /* runtime directory */ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); -#define TMTY_LOG(l, ...) 
rte_log(RTE_LOG_ ## l, logtype, "TELEMETRY: " __VA_ARGS__) +#define RTE_LOGTYPE_TMTY logtype +#define TMTY_LOG_LINE(l, ...) \ + RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; @@ -417,7 +420,7 @@ socket_listener(void *socket) struct socket *s = (struct socket *)socket; int s_accepted = accept(s->sock, NULL, NULL); if (s_accepted < 0) { - TMTY_LOG(ERR, "Error with accept, telemetry thread quitting\n"); + TMTY_LOG_LINE(ERR, "Error with accept, telemetry thread quitting"); return NULL; } if (s->num_clients != NULL) { @@ -433,7 +436,7 @@ socket_listener(void *socket) rc = pthread_create(&th, NULL, s->fn, (void *)(uintptr_t)s_accepted); if (rc != 0) { - TMTY_LOG(ERR, "Error with create client thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create client thread: %s", strerror(rc)); close(s_accepted); if (s->num_clients != NULL) @@ -469,22 +472,22 @@ create_socket(char *path) { int sock = socket(AF_UNIX, SOCK_SEQPACKET, 0); if (sock < 0) { - TMTY_LOG(ERR, "Error with socket creation, %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error with socket creation, %s", strerror(errno)); return -1; } struct sockaddr_un sun = {.sun_family = AF_UNIX}; strlcpy(sun.sun_path, path, sizeof(sun.sun_path)); - TMTY_LOG(DEBUG, "Attempting socket bind to path '%s'\n", path); + TMTY_LOG_LINE(DEBUG, "Attempting socket bind to path '%s'", path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { struct stat st; - TMTY_LOG(DEBUG, "Initial bind to socket '%s' failed.\n", path); + TMTY_LOG_LINE(DEBUG, "Initial bind to socket '%s' failed.", path); /* first check if we have a runtime dir */ if (stat(socket_dir, &st) < 0 || !S_ISDIR(st.st_mode)) { - TMTY_LOG(ERR, "Cannot access DPDK runtime directory: %s\n", socket_dir); + TMTY_LOG_LINE(ERR, "Cannot access DPDK runtime directory: %s", socket_dir); close(sock); return -ENOENT; } @@ -496,22 +499,22 @@ create_socket(char *path) } /* socket is not active, delete and attempt rebind */ - TMTY_LOG(DEBUG, "Attempting unlink and retrying bind\n"); + TMTY_LOG_LINE(DEBUG, "Attempting unlink and retrying bind"); unlink(sun.sun_path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { - TMTY_LOG(ERR, "Error binding socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error binding socket: %s", strerror(errno)); close(sock); return -errno; /* if unlink failed, this will be -EADDRINUSE as above */ } } if (listen(sock, 1) < 0) { - TMTY_LOG(ERR, "Error calling listen for socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error calling listen for socket: %s", strerror(errno)); unlink(sun.sun_path); close(sock); return -errno; } - TMTY_LOG(DEBUG, "Socket creation and binding ok\n"); + TMTY_LOG_LINE(DEBUG, "Socket creation and binding ok"); return sock; } @@ -535,14 +538,14 @@ telemetry_legacy_init(void) int rc; if (num_legacy_callbacks == 1) { - TMTY_LOG(WARNING, "No legacy callbacks, legacy socket not created\n"); + TMTY_LOG_LINE(WARNING, "No legacy callbacks, legacy socket not created"); return -1; } v1_socket.fn = legacy_client_handler; if ((size_t) snprintf(v1_socket.path, sizeof(v1_socket.path), "%s/telemetry", socket_dir) >= sizeof(v1_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v1_socket.sock = create_socket(v1_socket.path); @@ -552,7 +555,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, 
NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create legacy socket thread: %s", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; @@ -562,7 +565,7 @@ telemetry_legacy_init(void) } pthread_setaffinity_np(t_old, sizeof(*thread_cpuset), thread_cpuset); set_thread_name(t_old, "dpdk-telemet-v1"); - TMTY_LOG(DEBUG, "Legacy telemetry socket initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Legacy telemetry socket initialized ok"); pthread_detach(t_old); return 0; } @@ -584,7 +587,7 @@ telemetry_v2_init(void) "Returns help text for a command. Parameters: string command"); v2_socket.fn = client_handler; if (strlcpy(spath, get_socket_path(socket_dir, 2), sizeof(spath)) >= sizeof(spath)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } memcpy(v2_socket.path, spath, sizeof(v2_socket.path)); @@ -599,14 +602,14 @@ telemetry_v2_init(void) /* add a suffix to the path if the basic version fails */ if (snprintf(v2_socket.path, sizeof(v2_socket.path), "%s:%d", spath, ++suffix) >= (int)sizeof(v2_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v2_socket.sock = create_socket(v2_socket.path); } rc = pthread_create(&t_new, NULL, socket_listener, &v2_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create socket thread: %s", strerror(rc)); close(v2_socket.sock); v2_socket.sock = -1; @@ -634,7 +637,7 @@ rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_ #ifndef RTE_EXEC_ENV_WINDOWS if (telemetry_v2_init() != 0) return -1; - TMTY_LOG(DEBUG, "Telemetry initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Telemetry initialized ok"); telemetry_legacy_init(); #endif /* RTE_EXEC_ENV_WINDOWS */ diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 10ab77262e..f2c275a7d7 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -150,16 +150,16 @@ vhost_user_iotlb_pending_insert(struct virtio_net *dev, uint64_t iova, uint8_t p node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for pending insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for pending insertion"); if (!TAILQ_EMPTY(&dev->iotlb_pending_list)) vhost_user_iotlb_pending_remove_all(dev); else vhost_user_iotlb_cache_random_evict(dev); node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, pending insertion failure\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, pending insertion failure"); return; } } @@ -253,16 +253,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for cache insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for cache insertion"); if (!TAILQ_EMPTY(&dev->iotlb_list)) vhost_user_iotlb_cache_random_evict(dev); else vhost_user_iotlb_pending_remove_all(dev); new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, cache insertion failed\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, cache insertion failed"); return; } } @@ -415,7 +415,7 @@ vhost_user_iotlb_init(struct virtio_net *dev) dev->iotlb_pool = rte_calloc_socket("iotlb", IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0, socket); if (!dev->iotlb_pool) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to create IOTLB cache pool\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to create IOTLB cache pool"); return -1; } for (i = 0; i < IOTLB_CACHE_SIZE; i++) diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index 5882e44176..a2fdac30a4 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -128,17 +128,17 @@ read_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int m ret = recvmsg(sockfd, &msgh, 0); if (ret <= 0) { if (ret) - VHOST_LOG_CONFIG(ifname, ERR, "recvmsg failed on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "recvmsg failed on fd %d (%s)", sockfd, strerror(errno)); return ret; } if (msgh.msg_flags & MSG_TRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated msg (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated msg (fd %d)", sockfd); /* MSG_CTRUNC may be caused by LSM misconfiguration */ if (msgh.msg_flags & MSG_CTRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated control data (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated control data (fd %d)", sockfd); for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL; cmsg = CMSG_NXTHDR(&msgh, cmsg)) { @@ -181,7 +181,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f msgh.msg_controllen = sizeof(control); cmsg = CMSG_FIRSTHDR(&msgh); if (cmsg == NULL) { - VHOST_LOG_CONFIG(ifname, ERR, "cmsg == NULL\n"); + VHOST_CONFIG_LOG(ifname, ERR, "cmsg == NULL"); errno = EINVAL; return -1; } @@ -199,7 +199,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f } while (ret < 0 && errno == EINTR); if (ret < 0) { - VHOST_LOG_CONFIG(ifname, ERR, "sendmsg error on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "sendmsg error on fd %d (%s)", sockfd, strerror(errno)); return ret; } @@ -252,13 +252,13 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) dev->async_copy = 1; } - VHOST_LOG_CONFIG(vsocket->path, INFO, "new device, handle is %d\n", vid); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new device, handle is %d", vid); if (vsocket->notify_ops->new_connection) { ret = vsocket->notify_ops->new_connection(vid); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add vhost user connection with fd %d\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add vhost user connection with fd %d", fd); goto err_cleanup; } @@ -270,8 +270,8 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_read_cb, NULL, conn); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add fd %d into vhost server fdset\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add fd %d into vhost server fdset", fd); if (vsocket->notify_ops->destroy_connection) @@ -304,7 +304,7 @@ vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused) if (fd < 0) return; - VHOST_LOG_CONFIG(vsocket->path, INFO, "new vhost user connection is %d\n", fd); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new vhost user connection is %d", fd); vhost_user_add_connection(fd, vsocket); } @@ -352,12 +352,12 @@ create_unix_socket(struct vhost_user_socket *vsocket) fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) return -1; - 
VHOST_LOG_CONFIG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d\n", + VHOST_CONFIG_LOG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d", vsocket->is_server ? "server" : "client", fd); if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)", fd, strerror(errno)); close(fd); return -1; @@ -391,11 +391,11 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) */ ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un)); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to bind: %s; remove it and try again\n", + VHOST_CONFIG_LOG(path, ERR, "failed to bind: %s; remove it and try again", strerror(errno)); goto err; } - VHOST_LOG_CONFIG(path, INFO, "binding succeeded\n"); + VHOST_CONFIG_LOG(path, INFO, "binding succeeded"); ret = listen(fd, MAX_VIRTIO_BACKLOG); if (ret < 0) @@ -404,7 +404,7 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection, NULL, vsocket); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to add listen fd %d to vhost server fdset\n", + VHOST_CONFIG_LOG(path, ERR, "failed to add listen fd %d to vhost server fdset", fd); goto err; } @@ -444,12 +444,12 @@ vhost_user_connect_nonblock(char *path, int fd, struct sockaddr *un, size_t sz) flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - VHOST_LOG_CONFIG(path, ERR, "can't get flags for connfd %d (%s)\n", + VHOST_CONFIG_LOG(path, ERR, "can't get flags for connfd %d (%s)", fd, strerror(errno)); return -2; } if ((flags & O_NONBLOCK) && fcntl(fd, F_SETFL, flags & ~O_NONBLOCK)) { - VHOST_LOG_CONFIG(path, ERR, "can't disable nonblocking on fd %d\n", fd); + VHOST_CONFIG_LOG(path, ERR, "can't disable nonblocking on fd %d", fd); return -2; } return 0; @@ -477,15 +477,15 @@ vhost_user_client_reconnect(void *arg __rte_unused) sizeof(reconn->un)); if (ret == -2) { close(reconn->fd); - VHOST_LOG_CONFIG(reconn->vsocket->path, ERR, - "reconnection for fd %d failed\n", + VHOST_CONFIG_LOG(reconn->vsocket->path, ERR, + "reconnection for fd %d failed", reconn->fd); goto remove_fd; } if (ret == -1) continue; - VHOST_LOG_CONFIG(reconn->vsocket->path, INFO, "connected\n"); + VHOST_CONFIG_LOG(reconn->vsocket->path, INFO, "connected"); vhost_user_add_connection(reconn->fd, reconn->vsocket); remove_fd: TAILQ_REMOVE(&reconn_list.head, reconn, next); @@ -506,7 +506,7 @@ vhost_user_reconnect_init(void) ret = pthread_mutex_init(&reconn_list.mutex, NULL); if (ret < 0) { - VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__); + VHOST_CONFIG_LOG("thread", ERR, "%s: failed to initialize mutex", __func__); return ret; } TAILQ_INIT(&reconn_list.head); @@ -514,10 +514,10 @@ vhost_user_reconnect_init(void) ret = rte_thread_create_internal_control(&reconn_tid, "vhost-reco", vhost_user_client_reconnect, NULL); if (ret != 0) { - VHOST_LOG_CONFIG("thread", ERR, "failed to create reconnect thread\n"); + VHOST_CONFIG_LOG("thread", ERR, "failed to create reconnect thread"); if (pthread_mutex_destroy(&reconn_list.mutex)) - VHOST_LOG_CONFIG("thread", ERR, - "%s: failed to destroy reconnect mutex\n", + VHOST_CONFIG_LOG("thread", ERR, + "%s: failed to destroy reconnect mutex", __func__); } @@ -539,17 +539,17 @@ vhost_user_start_client(struct vhost_user_socket *vsocket) return 0; } - VHOST_LOG_CONFIG(path, WARNING, 
"failed to connect: %s\n", strerror(errno)); + VHOST_CONFIG_LOG(path, WARNING, "failed to connect: %s", strerror(errno)); if (ret == -2 || !vsocket->reconnect) { close(fd); return -1; } - VHOST_LOG_CONFIG(path, INFO, "reconnecting...\n"); + VHOST_CONFIG_LOG(path, INFO, "reconnecting..."); reconn = malloc(sizeof(*reconn)); if (reconn == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to allocate memory for reconnect\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to allocate memory for reconnect"); close(fd); return -1; } @@ -638,7 +638,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -731,7 +731,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -743,7 +743,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) } if (vdpa_dev->ops->get_features(vdpa_dev, &vdpa_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa features for socket file.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa features for socket file."); ret = -1; goto unlock_exit; } @@ -781,7 +781,7 @@ rte_vhost_driver_get_protocol_features(const char *path, pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -794,7 +794,7 @@ rte_vhost_driver_get_protocol_features(const char *path, if (vdpa_dev->ops->get_protocol_features(vdpa_dev, &vdpa_protocol_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa protocol features.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa protocol features."); ret = -1; goto unlock_exit; } @@ -818,7 +818,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -830,7 +830,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) } if (vdpa_dev->ops->get_queue_num(vdpa_dev, &vdpa_queue_num) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa queue number.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa queue number."); ret = -1; goto unlock_exit; } @@ -848,10 +848,10 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs) struct vhost_user_socket *vsocket; int ret = 0; - VHOST_LOG_CONFIG(path, INFO, "Setting max queue pairs to %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "Setting max queue pairs to %u", max_queue_pairs); if (max_queue_pairs > VHOST_MAX_QUEUE_PAIRS) { - VHOST_LOG_CONFIG(path, ERR, "Library only supports up to %u queue pairs\n", + VHOST_CONFIG_LOG(path, ERR, "Library only supports up to %u queue pairs", VHOST_MAX_QUEUE_PAIRS); return -1; } @@ -859,7 +859,7 @@ rte_vhost_driver_set_max_queue_num(const char 
*path, uint32_t max_queue_pairs) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -898,7 +898,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) pthread_mutex_lock(&vhost_user.mutex); if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) { - VHOST_LOG_CONFIG(path, ERR, "the number of vhost sockets reaches maximum\n"); + VHOST_CONFIG_LOG(path, ERR, "the number of vhost sockets reaches maximum"); goto out; } @@ -908,14 +908,14 @@ rte_vhost_driver_register(const char *path, uint64_t flags) memset(vsocket, 0, sizeof(struct vhost_user_socket)); vsocket->path = strdup(path); if (vsocket->path == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to copy socket path string\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to copy socket path string"); vhost_user_socket_mem_free(vsocket); goto out; } TAILQ_INIT(&vsocket->conn_list); ret = pthread_mutex_init(&vsocket->conn_mutex, NULL); if (ret) { - VHOST_LOG_CONFIG(path, ERR, "failed to init connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to init connection mutex"); goto out_free; } @@ -936,7 +936,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy && (vsocket->iommu_support || (flags & RTE_VHOST_USER_POSTCOPY_SUPPORT))) { - VHOST_LOG_CONFIG(path, ERR, "async copy with IOMMU or post-copy not supported\n"); + VHOST_CONFIG_LOG(path, ERR, "async copy with IOMMU or post-copy not supported"); goto out_mutex; } @@ -965,7 +965,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy) { vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL); vsocket->features &= ~(1ULL << VHOST_F_LOG_ALL); - VHOST_LOG_CONFIG(path, INFO, "logging feature is disabled in async copy mode\n"); + VHOST_CONFIG_LOG(path, INFO, "logging feature is disabled in async copy mode"); } /* @@ -979,8 +979,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags) (1ULL << VIRTIO_NET_F_HOST_TSO6) | (1ULL << VIRTIO_NET_F_HOST_UFO); - VHOST_LOG_CONFIG(path, INFO, "Linear buffers requested without external buffers,\n"); - VHOST_LOG_CONFIG(path, INFO, "disabling host segmentation offloading support\n"); + VHOST_CONFIG_LOG(path, INFO, "Linear buffers requested without external buffers,"); + VHOST_CONFIG_LOG(path, INFO, "disabling host segmentation offloading support"); vsocket->supported_features &= ~seg_offload_features; vsocket->features &= ~seg_offload_features; } @@ -995,7 +995,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) ~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT); } else { #ifndef RTE_LIBRTE_VHOST_POSTCOPY - VHOST_LOG_CONFIG(path, ERR, "Postcopy requested but not compiled\n"); + VHOST_CONFIG_LOG(path, ERR, "Postcopy requested but not compiled"); ret = -1; goto out_mutex; #endif @@ -1023,7 +1023,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) out_mutex: if (pthread_mutex_destroy(&vsocket->conn_mutex)) { - VHOST_LOG_CONFIG(path, ERR, "failed to destroy connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to destroy connection mutex"); } out_free: vhost_user_socket_mem_free(vsocket); @@ -1113,7 +1113,7 @@ rte_vhost_driver_unregister(const char *path) goto again; } - VHOST_LOG_CONFIG(path, INFO, "free connfd %d\n", conn->connfd); + VHOST_CONFIG_LOG(path, INFO, "free connfd %d", conn->connfd); close(conn->connfd); vhost_destroy_device(conn->vid); 
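/*
 * Aside (illustrative sketch, not part of this patch): the *_LOG_LINE and
 * *_CONFIG_LOG helpers converted throughout this series share one idea --
 * the trailing "\n" is appended once, inside the macro, so call sites pass
 * single-line format strings. Two minimal sketches, assuming <rte_log.h> is
 * included and RTE_LOGTYPE_EXAMPLE maps to a registered log type (EXAMPLE is
 * a hypothetical name used only for illustration):
 */

/* Variadic form, mirroring the regexdev/telemetry helpers in this series: */
#define EXAMPLE_LOG_LINE(level, ...) \
	RTE_LOG(level, EXAMPLE, \
		RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
			RTE_FMT_TAIL(__VA_ARGS__,)))

/* GNU named-argument form, mirroring the pdump/power helpers in this series: */
#define EXAMPLE_DEBUG_LOG(fmt, args...) \
	RTE_LOG(DEBUG, EXAMPLE, "%s(): " fmt "\n", __func__, ## args)

/*
 * RTE_FMT_HEAD() expands to the first variadic argument (the format string),
 * so "\n" can be concatenated to it at compile time; RTE_FMT_TAIL() expands
 * to the remaining arguments.
 */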
TAILQ_REMOVE(&vsocket->conn_list, conn, next); @@ -1192,14 +1192,14 @@ rte_vhost_driver_start(const char *path) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vhost_user.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vhost fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vhost fdset"); return -1; } int ret = rte_thread_create_internal_control(&fdset_tid, "vhost-evt", fdset_event_dispatch, &vhost_user.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create fdset handling thread"); fdset_pipe_uninit(&vhost_user.fdset); return -1; } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 219eef879c..9776fc07a9 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -84,8 +84,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, !ops->get_protocol_features || !ops->dev_conf || !ops->dev_close || !ops->set_vring_state || !ops->set_features) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Some mandatory vDPA ops aren't implemented\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Some mandatory vDPA ops aren't implemented"); return NULL; } @@ -107,8 +107,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, if (ops->get_dev_type) { ret = ops->get_dev_type(dev, &dev->type); if (ret) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Failed to get vdpa dev type.\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Failed to get vdpa dev type."); ret = -1; goto out_unlock; } diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c index 080b58f7de..c7ba5a61dd 100644 --- a/lib/vhost/vduse.c +++ b/lib/vhost/vduse.c @@ -78,32 +78,32 @@ vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unuse ret = ioctl(dev->vduse_dev_fd, VDUSE_IOTLB_GET_FD, &entry); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64, iova); return -1; } fd = ret; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "New IOTLB entry:\n"); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "New IOTLB entry:"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64, (uint64_t)entry.start, (uint64_t)entry.last); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\toffset: %" PRIx64 "\n", (uint64_t)entry.offset); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tfd: %d\n", fd); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tperm: %x\n", entry.perm); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\toffset: %" PRIx64, (uint64_t)entry.offset); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tfd: %d", fd); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tperm: %x", entry.perm); size = entry.last - entry.start + 1; mmap_addr = mmap(0, size + entry.offset, entry.perm, MAP_SHARED, fd, 0); if (!mmap_addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to mmap IOTLB entry for 0x%" PRIx64 "\n", iova); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to mmap IOTLB entry for 0x%" PRIx64, iova); ret = -1; goto close_fd; } ret = fstat(fd, &stat); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get page size.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get page size."); munmap(mmap_addr, entry.offset + size); goto close_fd; } @@ -134,14 +134,14 @@ vduse_control_queue_event(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &buf, sizeof(buf)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read control queue 
event: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read control queue event: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue kicked\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue kicked"); if (virtio_net_ctrl_handle(dev)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to handle ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to handle ctrl request"); } static void @@ -156,21 +156,21 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq_info.index = index; ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_GET_INFO, &vq_info); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get VQ %u info: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get VQ %u info: %s", index, strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "VQ %u info:\n", index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnum: %u\n", vq_info.num); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", (unsigned long long)vq_info.desc_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdriver_addr: %llx", (unsigned long long)vq_info.driver_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", (unsigned long long)vq_info.device_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tavail_idx: %u\n", vq_info.split.avail_index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tready: %u\n", vq_info.ready); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq_info.split.avail_index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); vq->last_avail_idx = vq_info.split.avail_index; vq->size = vq_info.num; @@ -182,12 +182,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (vq->kickfd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s", index, strerror(errno)); vq->kickfd = VIRTIO_INVALID_EVENTFD; return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tkick fd: %d\n", vq->kickfd); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tkick fd: %d", vq->kickfd); vq->shadow_used_split = rte_malloc_socket(NULL, vq->size * sizeof(struct vring_used_elem), @@ -198,12 +198,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vhost_user_iotlb_rd_lock(vq); if (vring_translate(dev, vq)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to translate vring %d addresses\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to translate vring %d addresses", index); if (vhost_enable_guest_notification(dev, vq, 0)) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to disable guest notifications on vring %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to disable guest notifications on vring %d", index); vhost_user_iotlb_rd_unlock(vq); @@ -212,7 +212,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD; @@ -222,8 +222,8 @@ 
vduse_vring_setup(struct virtio_net *dev, unsigned int index) if (vq == dev->cvq) { ret = fdset_add(&vduse.fdset, vq->kickfd, vduse_control_queue_event, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to setup kickfd handler for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to setup kickfd handler for VQ %u: %s", index, strerror(errno)); vq_efd.fd = VDUSE_EVENTFD_DEASSIGN; ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); @@ -232,7 +232,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) } fdset_pipe_notify(&vduse.fdset); vhost_enable_guest_notification(dev, vq, 1); - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl queue event handler installed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl queue event handler installed"); } } @@ -253,7 +253,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); @@ -279,23 +279,23 @@ vduse_device_start(struct virtio_net *dev) { unsigned int i, ret; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Starting device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Starting device..."); dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to get callback ops for driver"); return; } ret = ioctl(dev->vduse_dev_fd, VDUSE_DEV_GET_FEATURES, &dev->features); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get features: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get features: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64, dev->features); if (dev->features & @@ -331,7 +331,7 @@ vduse_device_stop(struct virtio_net *dev) { unsigned int i; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Stopping device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Stopping device..."); vhost_destroy_device_notify(dev); @@ -357,34 +357,34 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &req, sizeof(req)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read request: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read request: %s", strerror(errno)); return; } else if (ret < (int)sizeof(req)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Incomplete to read request %d\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Incomplete to read request %d", ret); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "New request: %s (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "New request: %s (%u)", vduse_req_id_to_str(req.type), req.type); switch (req.type) { case VDUSE_GET_VQ_STATE: vq = dev->virtqueue[req.vq_state.index]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u", req.vq_state.index, vq->last_avail_idx); resp.vq_state.split.avail_index = vq->last_avail_idx; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_SET_STATUS: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnew status: 0x%08x\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnew status: 0x%08x", req.s.status); old_status = dev->status; dev->status = 
req.s.status; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_UPDATE_IOTLB: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64, (uint64_t)req.iova.start, (uint64_t)req.iova.last); vhost_user_iotlb_cache_remove(dev, req.iova.start, req.iova.last - req.iova.start + 1); @@ -399,7 +399,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = write(dev->vduse_dev_fd, &resp, sizeof(resp)); if (ret != sizeof(resp)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to write response %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to write response %s", strerror(errno)); return; } @@ -411,7 +411,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) vduse_device_stop(dev); } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Request %s (%u) handled successfully\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Request %s (%u) handled successfully", vduse_req_id_to_str(req.type), req.type); } @@ -435,14 +435,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vduse.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vduse fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vduse fdset"); return -1; } ret = rte_thread_create_internal_control(&fdset_tid, "vduse-evt", fdset_event_dispatch, &vduse.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create vduse fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create vduse fdset handling thread"); fdset_pipe_uninit(&vduse.fdset); return -1; } @@ -452,13 +452,13 @@ vduse_device_create(const char *path, bool compliant_ol_flags) control_fd = open(VDUSE_CTRL_PATH, O_RDWR); if (control_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open %s: %s", VDUSE_CTRL_PATH, strerror(errno)); return -1; } if (ioctl(control_fd, VDUSE_SET_API_VERSION, &ver)) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set API version: %" PRIu64 ": %s", ver, strerror(errno)); ret = -1; goto out_ctrl_close; @@ -467,24 +467,24 @@ vduse_device_create(const char *path, bool compliant_ol_flags) dev_config = malloc(offsetof(struct vduse_dev_config, config) + sizeof(vnet_config)); if (!dev_config) { - VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to allocate VDUSE config"); ret = -1; goto out_ctrl_close; } ret = rte_vhost_driver_get_features(path, &features); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get backend features\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get backend features"); goto out_free; } ret = rte_vhost_driver_get_queue_num(path, &max_queue_pairs); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get max queue pairs\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get max queue pairs"); goto out_free; } - VHOST_LOG_CONFIG(path, INFO, "VDUSE max queue pairs: %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "VDUSE max queue pairs: %u", max_queue_pairs); total_queues = max_queue_pairs * 2; if (max_queue_pairs == 1) @@ -506,14 +506,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to create VDUSE device: 
%s", strerror(errno)); goto out_free; } dev_fd = open(path, O_RDWR); if (dev_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open device %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open device %s: %s", path, strerror(errno)); ret = -1; goto out_dev_close; @@ -521,14 +521,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fcntl(dev_fd, F_SETFL, O_NONBLOCK); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set chardev as non-blocking: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set chardev as non-blocking: %s", strerror(errno)); goto out_dev_close; } vid = vhost_new_device(&vduse_backend_ops); if (vid < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create new Vhost device\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to create new Vhost device"); ret = -1; goto out_dev_close; } @@ -549,7 +549,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = alloc_vring_queue(dev, i); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to alloc vring %d metadata\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to alloc vring %d metadata", i); goto out_dev_destroy; } @@ -558,7 +558,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP, &vq_cfg); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set-up VQ %d\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to set-up VQ %d", i); goto out_dev_destroy; } } @@ -567,7 +567,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fdset_add(&vduse.fdset, dev->vduse_dev_fd, vduse_events_handler, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to add fd %d to vduse fdset\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to add fd %d to vduse fdset", dev->vduse_dev_fd); goto out_dev_destroy; } @@ -624,7 +624,7 @@ vduse_device_destroy(const char *path) if (dev->vduse_ctrl_fd >= 0) { ret = ioctl(dev->vduse_ctrl_fd, VDUSE_DESTROY_DEV, name); if (ret) - VHOST_LOG_CONFIG(name, ERR, "Failed to destroy VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to destroy VDUSE device: %s", strerror(errno)); close(dev->vduse_ctrl_fd); dev->vduse_ctrl_fd = -1; diff --git a/lib/vhost/vduse.h b/lib/vhost/vduse.h index 4879b1f900..0d8f3f1205 100644 --- a/lib/vhost/vduse.h +++ b/lib/vhost/vduse.h @@ -21,14 +21,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) { RTE_SET_USED(compliant_ol_flags); - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } static inline int vduse_device_destroy(const char *path) { - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8a1f992d9d..5912a42979 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -100,8 +100,8 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, vhost_user_iotlb_pending_insert(dev, iova, perm); if (vhost_iotlb_miss(dev, iova, perm)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "IOTLB miss req failed for IOVA 0x%" PRIx64 "\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "IOTLB miss req failed for IOVA 0x%" PRIx64, iova); vhost_user_iotlb_pending_remove(dev, iova, 1, perm); } @@ -174,8 +174,8 @@ __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed 
to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -292,8 +292,8 @@ __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -473,9 +473,9 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, gpa = hva_to_gpa(dev, hva, exp_size); if (!gpa) { - VHOST_LOG_DATA(dev->ifname, ERR, + VHOST_DATA_LOG(dev->ifname, ERR, "failed to find GPA for log_addr: 0x%" - PRIx64 " hva: 0x%" PRIx64 "\n", + PRIx64 " hva: 0x%" PRIx64, log_addr, hva); return 0; } @@ -609,7 +609,7 @@ init_vring_queue(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq #ifdef RTE_LIBRTE_VHOST_NUMA if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); numa_node = SOCKET_ID_ANY; } @@ -640,8 +640,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0); if (vq == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vring %u.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vring %u.", i); return -1; } @@ -678,8 +678,8 @@ reset_device(struct virtio_net *dev) struct vhost_virtqueue *vq = dev->virtqueue[i]; if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to reset vring, virtqueue not allocated (%d)\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to reset vring, virtqueue not allocated (%d)", i); continue; } reset_vring_queue(dev, vq); @@ -697,17 +697,17 @@ vhost_new_device(struct vhost_backend_ops *ops) int i; if (ops == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing backend ops.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing backend ops."); return -1; } if (ops->iotlb_miss == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IOTLB miss backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IOTLB miss backend op."); return -1; } if (ops->inject_irq == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IRQ injection backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IRQ injection backend op."); return -1; } @@ -718,14 +718,14 @@ vhost_new_device(struct vhost_backend_ops *ops) } if (i == RTE_MAX_VHOST_DEVICE) { - VHOST_LOG_CONFIG("device", ERR, "failed to find a free slot for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to find a free slot for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } dev = rte_zmalloc(NULL, sizeof(struct virtio_net), 0); if (dev == NULL) { - VHOST_LOG_CONFIG("device", ERR, "failed to allocate memory for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to allocate memory for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } @@ -832,7 +832,7 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats dev->flags &= ~VIRTIO_DEV_SUPPORT_IOMMU; if (vhost_user_iotlb_init(dev) < 0) - VHOST_LOG_CONFIG("device", ERR, "failed to init IOTLB\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to init IOTLB"); } @@ 
-891,7 +891,7 @@ rte_vhost_get_numa_node(int vid) ret = get_mempolicy(&numa_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); return -1; } @@ -1608,8 +1608,8 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid) return 0; if (unlikely(qid >= dev->nr_vring || (qid & 1) == 0)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, qid); return 0; } @@ -1775,16 +1775,16 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) int node = vq->numa_node; if (unlikely(vq->async)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "async register failed: already registered (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "async register failed: already registered (qid: %d)", vq->index); return -1; } async = rte_zmalloc_socket(NULL, sizeof(struct vhost_async), 0, node); if (!async) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async metadata (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async metadata (qid: %d)", vq->index); return -1; } @@ -1792,8 +1792,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_info = rte_malloc_socket(NULL, vq->size * sizeof(struct async_inflight_info), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async_pkts_info (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async_pkts_info (qid: %d)", vq->index); goto out_free_async; } @@ -1801,8 +1801,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_cmpl_flag) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async pkts_cmpl_flag (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async pkts_cmpl_flag (qid: %d)", vq->index); goto out_free_async; } @@ -1812,8 +1812,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, node); if (!async->buffers_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async buffers (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async buffers (qid: %d)", vq->index); goto out_free_inflight; } @@ -1822,8 +1822,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem), RTE_CACHE_LINE_SIZE, node); if (!async->descs_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async descs (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async descs (qid: %d)", vq->index); goto out_free_inflight; } @@ -1914,8 +1914,8 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to unregister async channel, virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to unregister async channel, virtqueue busy."); return ret; } @@ -1927,9 +1927,9 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) if (!vq->async) { ret = 0; } else if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister 
async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); } else { vhost_free_async_mem(vq); ret = 0; @@ -1964,9 +1964,9 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) return 0; if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); return -1; } @@ -1985,17 +1985,17 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2005,8 +2005,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * info.max_vchans, RTE_CACHE_LINE_SIZE); if (vchans == NULL) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate vchans for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate vchans for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2015,7 +2015,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) } if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n", + VHOST_CONFIG_LOG("dma", INFO, "DMA %d vChannel %u already registered.", dma_id, vchan_id); pthread_mutex_unlock(&vhost_dma_lock); return 0; @@ -2027,8 +2027,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pkts_cmpl_flag_addr = rte_zmalloc(NULL, sizeof(bool *) * max_desc, RTE_CACHE_LINE_SIZE); if (!pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.", dma_id, vchan_id); if (dma_copy_track[dma_id].nr_vchans == 0) { @@ -2070,8 +2070,8 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to check in-flight packets. virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to check in-flight packets. 
virtqueue busy."); return ret; } @@ -2284,30 +2284,30 @@ rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid channel %d:%u.", dma_id, vchan_id); goto error; } if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, + "Failed to get stats for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } if (stats.submitted - stats.completed != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Do not unconfigure when there are inflight packets.\n"); + VHOST_CONFIG_LOG("dma", ERR, + "Do not unconfigure when there are inflight packets."); goto error; } diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5f24911190..5a74d0e628 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -673,17 +673,17 @@ vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, } extern int vhost_config_log_level; +#define RTE_LOGTYPE_VHOST_CONFIG vhost_config_log_level extern int vhost_data_log_level; +#define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level -#define VHOST_LOG_CONFIG(prefix, level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, vhost_config_log_level, \ - "VHOST_CONFIG: (%s) " fmt, prefix, ##args) +#define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ + RTE_LOG(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) -#define VHOST_LOG_DATA(prefix, level, fmt, args...) \ - (void)((RTE_LOG_ ## level <= RTE_LOG_DP_LEVEL) ? \ - rte_log(RTE_LOG_ ## level, vhost_data_log_level, \ - "VHOST_DATA: (%s) " fmt, prefix, ##args) : \ - 0) +#define VHOST_DATA_LOG(prefix, level, fmt, args...) 
\ + RTE_LOG_DP(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 @@ -702,7 +702,7 @@ extern int vhost_data_log_level; } \ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \ \ - VHOST_LOG_DATA(device->ifname, DEBUG, "%s", packet); \ + RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ } while (0) #else #define PRINT_PACKET(device, addr, size, header) do {} while (0) @@ -830,7 +830,7 @@ get_device(int vid) dev = vhost_devices[vid]; if (unlikely(!dev)) { - VHOST_LOG_CONFIG("device", ERR, "(%d) device not found.\n", vid); + VHOST_CONFIG_LOG("device", ERR, "(%d) device not found.", vid); } return dev; @@ -963,8 +963,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->signalled_used = new; vq->signalled_used_valid = true; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: used_event_idx=%d, old=%d, new=%d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: used_event_idx=%d, old=%d, new=%d", __func__, vhost_used_event(vq), old, new); if (vhost_need_event(vhost_used_event(vq), new, old) || diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 413f068bcd..bac10e6182 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -93,8 +93,8 @@ validate_msg_fds(struct virtio_net *dev, struct vhu_msg_context *ctx, int expect if (ctx->fd_num == expected_fds) return 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, - "expect %d FDs for request %s, received %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "expect %d FDs for request %s, received %d", expected_fds, vhost_message_handlers[ctx->msg.request.frontend].description, ctx->fd_num); @@ -144,7 +144,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) return; /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */ - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine map failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine map failed"); } } @@ -160,7 +160,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) if (rte_errno == EINVAL) return; - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine unmap failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine unmap failed"); } } } @@ -339,7 +339,7 @@ vhost_user_set_features(struct virtio_net **pdev, rte_vhost_driver_get_features(dev->ifname, &vhost_features); if (features & ~vhost_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid negotiated features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid negotiated features."); dev->flags |= VIRTIO_DEV_FEATURES_FAILED; dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; @@ -356,8 +356,8 @@ vhost_user_set_features(struct virtio_net **pdev, * is enabled when the live-migration starts. 
*/ if ((dev->features ^ features) & ~(1ULL << VHOST_F_LOG_ALL)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "features changed while device is running.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "features changed while device is running."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -374,11 +374,11 @@ vhost_user_set_features(struct virtio_net **pdev, } else { dev->vhost_hlen = sizeof(struct virtio_net_hdr); } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Virtio features: 0x%" PRIx64, dev->features); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "mergeable RX buffers %s, virtio 1 %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "mergeable RX buffers %s, virtio 1 %s", (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" : "off", (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" : "off"); @@ -426,8 +426,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, struct vhost_virtqueue *vq = dev->virtqueue[ctx->msg.payload.state.index]; if (ctx->msg.payload.state.num > 32768) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", ctx->msg.payload.state.num); return RTE_VHOST_MSG_RESULT_ERR; } @@ -445,8 +445,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, */ if (!vq_is_packed(dev)) { if (vq->size & (vq->size - 1)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", vq->size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -459,8 +459,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for shadow used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for shadow used ring."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -472,8 +472,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vq internal data.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vq internal data."); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -483,8 +483,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, vq->size * sizeof(struct batch_copy_elem), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->batch_copy_elems) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for batching copy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for batching copy."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -520,8 +520,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&node, NULL, 0, vq->desc, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "unable to get virtqueue %d numa information.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "unable to get virtqueue %d numa information.", vq->index); return; } @@ -531,15 +531,15 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq = rte_realloc_socket(*pvq, sizeof(**pvq), 0, node); if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc virtqueue %d on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc virtqueue %d on node %d", (*pvq)->index, node); return; } *pvq = vq; if (vq != dev->virtqueue[vq->index]) { - VHOST_LOG_CONFIG(dev->ifname, 
INFO, "reallocated virtqueue on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node); dev->virtqueue[vq->index] = vq; } @@ -549,8 +549,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sup = rte_realloc_socket(vq->shadow_used_packed, vq->size * sizeof(*sup), RTE_CACHE_LINE_SIZE, node); if (!sup) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow packed on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow packed on node %d", node); return; } @@ -561,8 +561,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sus = rte_realloc_socket(vq->shadow_used_split, vq->size * sizeof(*sus), RTE_CACHE_LINE_SIZE, node); if (!sus) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow split on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow split on node %d", node); return; } @@ -572,8 +572,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) bce = rte_realloc_socket(vq->batch_copy_elems, vq->size * sizeof(*bce), RTE_CACHE_LINE_SIZE, node); if (!bce) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc batch copy elem on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc batch copy elem on node %d", node); return; } @@ -584,8 +584,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) lc = rte_realloc_socket(vq->log_cache, sizeof(*lc) * VHOST_LOG_CACHE_NR, 0, node); if (!lc) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc log cache on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc log cache on node %d", node); return; } @@ -597,8 +597,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ri = rte_realloc_socket(vq->resubmit_inflight, sizeof(*ri), 0, node); if (!ri) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit inflight on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit inflight on node %d", node); return; } @@ -610,8 +610,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) rd = rte_realloc_socket(ri->resubmit_list, sizeof(*rd) * ri->resubmit_num, 0, node); if (!rd) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit list on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit list on node %d", node); return; } @@ -628,7 +628,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&dev_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "unable to get numa information.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "unable to get numa information."); return; } @@ -637,20 +637,20 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) dev = rte_realloc_socket(*pdev, sizeof(**pdev), 0, node); if (!dev) { - VHOST_LOG_CONFIG((*pdev)->ifname, ERR, "failed to realloc dev on node %d\n", node); + VHOST_CONFIG_LOG((*pdev)->ifname, ERR, "failed to realloc dev on node %d", node); return; } *pdev = dev; - VHOST_LOG_CONFIG(dev->ifname, INFO, "reallocated device on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated device on node %d", node); vhost_devices[dev->vid] = dev; mem_size = sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * dev->mem->nregions; mem = rte_realloc_socket(dev->mem, mem_size, 0, node); if (!mem) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc mem 
table on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc mem table on node %d", node); return; } @@ -659,8 +659,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) gp = rte_realloc_socket(dev->guest_pages, dev->max_guest_pages * sizeof(*gp), RTE_CACHE_LINE_SIZE, node); if (!gp) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc guest pages on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc guest pages on node %d", node); return; } @@ -771,8 +771,8 @@ mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64 size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - VHOST_LOG_CONFIG(dev->ifname, INFO, - "could not set coredump preference (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "could not set coredump preference (%s).", strerror(errno)); } #endif } @@ -791,7 +791,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->log_guest_addr = log_addr_to_gpa(dev, vq); if (vq->log_guest_addr == 0) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map log_guest_addr.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map log_guest_addr."); return; } } @@ -803,7 +803,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) if (vq->desc_packed == NULL || len != sizeof(struct vring_packed_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc_packed ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc_packed ring."); return; } @@ -819,8 +819,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.avail_user_addr, &len); if (vq->driver_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find driver area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find driver area address."); return; } @@ -832,8 +832,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.used_user_addr, &len); if (vq->device_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find device area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find device area address."); return; } @@ -851,7 +851,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->desc = (struct vring_desc *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.desc_user_addr, &len); if (vq->desc == 0 || len != sizeof(struct vring_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc ring."); return; } @@ -867,7 +867,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->avail = (struct vring_avail *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.avail_user_addr, &len); if (vq->avail == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map avail ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map avail ring."); return; } @@ -880,28 +880,28 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->used = (struct vring_used *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.used_user_addr, &len); if (vq->used == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to 
map used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map used ring."); return; } mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "last_used_idx (%u) and vq->used->idx (%u) mismatches;\n", + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "last_used_idx (%u) and vq->used->idx (%u) mismatches;", vq->last_used_idx, vq->used->idx); vq->last_used_idx = vq->used->idx; vq->last_avail_idx = vq->used->idx; - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "some packets maybe resent for Tx and dropped for Rx\n"); + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "some packets maybe resent for Tx and dropped for Rx"); } vq->access_ok = true; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address avail: %p\n", vq->avail); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address used: %p\n", vq->used); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64 "\n", vq->log_guest_addr); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address desc: %p", vq->desc); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address avail: %p", vq->avail); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address used: %p", vq->used); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64, vq->log_guest_addr); } /* @@ -975,8 +975,8 @@ vhost_user_set_vring_base(struct virtio_net **pdev, vq->last_avail_idx = ctx->msg.payload.state.num; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%u last_used_idx:%u last_avail_idx:%u.\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%u last_used_idx:%u last_avail_idx:%u.", ctx->msg.payload.state.index, vq->last_used_idx, vq->last_avail_idx); return RTE_VHOST_MSG_RESULT_OK; @@ -996,7 +996,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, dev->max_guest_pages * sizeof(*page), RTE_CACHE_LINE_SIZE); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "cannot realloc guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "cannot realloc guest_pages"); rte_free(old_pages); return -1; } @@ -1077,12 +1077,12 @@ dump_guest_pages(struct virtio_net *dev) for (i = 0; i < dev->nr_guest_pages; i++) { page = &dev->guest_pages[i]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "guest physical page region %u\n", i); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "guest physical page region %u", i); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64, page->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\thost_iova : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\thost_iova : %" PRIx64, page->host_iova); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tsize : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tsize : %" PRIx64, page->size); } } @@ -1131,9 +1131,9 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, ®_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to register ufd for region " - "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s\n", + "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s", (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1, @@ -1142,8 +1142,8 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, return -1; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t userfaultfd registered for range : 
%" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t userfaultfd registered for range : %" PRIx64 " - %" PRIx64, (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1); @@ -1190,8 +1190,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, * we've got to wait before we're allowed to generate faults. */ if (read_vhost_message(dev, main_fd, &ack_ctx) <= 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to read qemu ack on postcopy set-mem-table\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to read qemu ack on postcopy set-mem-table"); return -1; } @@ -1199,8 +1199,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, return -1; if (ack_ctx.msg.request.frontend != VHOST_USER_SET_MEM_TABLE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "bad qemu ack on postcopy set-mem-table (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "bad qemu ack on postcopy set-mem-table (%d)", ack_ctx.msg.request.frontend); return -1; } @@ -1227,8 +1227,8 @@ vhost_user_mmap_region(struct virtio_net *dev, /* Check for memory_size + mmap_offset overflow */ if (mmap_offset >= -region->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow", mmap_offset, region->size); return -1; } @@ -1243,7 +1243,7 @@ vhost_user_mmap_region(struct virtio_net *dev, */ alignment = get_blk_size(region->fd); if (alignment == (uint64_t)-1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "couldn't get hugepage size through fstat\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "couldn't get hugepage size through fstat"); return -1; } mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment); @@ -1256,8 +1256,8 @@ vhost_user_mmap_region(struct virtio_net *dev, * mmap() kernel implementation would return an error, but * better catch it before and provide useful info in the logs. 
*/ - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid", region->size + mmap_offset, alignment); return -1; } @@ -1267,7 +1267,7 @@ vhost_user_mmap_region(struct virtio_net *dev, MAP_SHARED | populate, region->fd, 0); if (mmap_addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap failed (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap failed (%s).", strerror(errno)); return -1; } @@ -1278,35 +1278,35 @@ vhost_user_mmap_region(struct virtio_net *dev, if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "adding guest pages to region failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "adding guest pages to region failed."); return -1; } } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "guest memory region size: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "guest memory region size: 0x%" PRIx64, region->size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest physical addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest physical addr: 0x%" PRIx64, region->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest virtual addr: 0x%" PRIx64, region->guest_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t host virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t host virtual addr: 0x%" PRIx64, region->host_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap addr : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap addr : 0x%" PRIx64, (uint64_t)(uintptr_t)mmap_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap size : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap size : 0x%" PRIx64, mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap align: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap align: 0x%" PRIx64, alignment); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap off : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap off : 0x%" PRIx64, mmap_offset); return 0; @@ -1329,14 +1329,14 @@ vhost_user_set_mem_table(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (memory->nregions > VHOST_MEMORY_MAX_NREGIONS) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "too many memory regions (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "too many memory regions (%u)", memory->nregions); goto close_msg_fds; } if (dev->mem && !vhost_memory_changed(memory, dev->mem)) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "memory regions not changed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "memory regions not changed"); close_msg_fds(ctx); @@ -1386,8 +1386,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, numa_node); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for dev->guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for dev->guest_pages"); goto close_msg_fds; } } @@ -1395,7 +1395,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node); if (dev->mem == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to allocate memory for dev->mem\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem"); goto free_guest_pages; } @@ -1416,7 +1416,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, mmap_offset = memory->regions[i].mmap_offset; if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap region %u\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region %u", i); goto free_mem_table; } @@ -1538,7 +1538,7 @@ virtio_is_ready(struct virtio_net *dev) dev->flags |= VIRTIO_DEV_READY; if (!(dev->flags & VIRTIO_DEV_RUNNING)) - VHOST_LOG_CONFIG(dev->ifname, INFO, "virtio is now ready for processing.\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "virtio is now ready for processing."); return 1; } @@ -1559,7 +1559,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f if (mfd == -1) { mfd = mkstemp(fname); if (mfd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to get inflight buffer fd\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to get inflight buffer fd"); return NULL; } @@ -1567,14 +1567,14 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } if (ftruncate(mfd, size) == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc inflight buffer"); close(mfd); return NULL; } ptr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0); if (ptr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap inflight buffer"); close(mfd); return NULL; } @@ -1616,8 +1616,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, void *addr; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid get_inflight_fd message size is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid get_inflight_fd message size is %d", ctx->msg.size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1633,7 +1633,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (!dev->inflight_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1642,11 +1642,11 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, num_queues = ctx->msg.payload.inflight.num_queues; queue_size = ctx->msg.payload.inflight.queue_size; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd num_queues: %u", ctx->msg.payload.inflight.num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd queue_size: %u", ctx->msg.payload.inflight.queue_size); if (vq_is_packed(dev)) @@ -1657,7 +1657,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, mmap_size = num_queues * pervq_inflight_size; addr = inflight_mem_alloc(dev, "vhost-inflight", mmap_size, &fd); if (!addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc vhost inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc vhost inflight area"); ctx->msg.payload.inflight.mmap_size = 0; return RTE_VHOST_MSG_RESULT_ERR; } @@ -1691,14 +1691,14 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, } } - 
VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_size: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_size: %"PRIu64, ctx->msg.payload.inflight.mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_offset: %"PRIu64, ctx->msg.payload.inflight.mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight fd: %d\n", ctx->fds[0]); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight fd: %d", ctx->fds[0]); return RTE_VHOST_MSG_RESULT_REPLY; } @@ -1722,8 +1722,8 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, fd = ctx->fds[0]; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight) || fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid set_inflight_fd message size is %d,fd is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid set_inflight_fd message size is %d,fd is %d", ctx->msg.size, fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1738,21 +1738,21 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, else pervq_inflight_size = get_pervq_shm_size_split(queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64"\n", mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64, mmap_size); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd mmap_offset: %"PRIu64, mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd num_queues: %u", num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd queue_size: %u", queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd fd: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd fd: %d", fd); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd pervq_inflight_size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd pervq_inflight_size: %d", pervq_inflight_size); /* @@ -1766,7 +1766,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (dev->inflight_info == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1780,7 +1780,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, addr = mmap(0, mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, mmap_offset); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap share memory.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap share memory."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1831,8 +1831,8 @@ vhost_user_set_vring_call(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring call idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring call idx:%d file:%d", file.index, file.fd); vq = dev->virtqueue[file.index]; @@ -1863,7 +1863,7 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev, if (!(ctx->msg.payload.u64 & VHOST_USER_VRING_NOFD_MASK)) close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented"); 
return RTE_VHOST_MSG_RESULT_OK; } @@ -1929,8 +1929,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (!resubmit) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1938,8 +1938,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (!resubmit->resubmit_list) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for inflight desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for inflight desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2025,8 +2025,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (resubmit == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2034,8 +2034,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (resubmit->resubmit_list == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2082,8 +2082,8 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring kick idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring kick idx:%d file:%d", file.index, file.fd); /* Interpret ring addresses only when ring is started. 
*/ @@ -2111,15 +2111,15 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, if (vq_is_packed(dev)) { if (vhost_check_queue_inflights_packed(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } } else { if (vhost_check_queue_inflights_split(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2159,8 +2159,8 @@ vhost_user_get_vring_base(struct virtio_net **pdev, ctx->msg.payload.state.num = vq->last_avail_idx; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%d file:%d", ctx->msg.payload.state.index, ctx->msg.payload.state.num); /* * Based on current qemu vhost-user implementation, this message is @@ -2217,8 +2217,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, bool enable = !!ctx->msg.payload.state.num; int index = (int)ctx->msg.payload.state.index; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set queue enable: %d to qp idx: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set queue enable: %d to qp idx: %d", enable, index); vq = dev->virtqueue[index]; @@ -2226,8 +2226,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, /* vhost_user_lock_all_queue_pairs locked all qps */ vq_assert_lock(dev, vq); if (enable && vq->async && vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to enable vring. Inflight packets must be completed first\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to enable vring. Inflight packets must be completed first"); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -2267,13 +2267,13 @@ vhost_user_set_protocol_features(struct virtio_net **pdev, rte_vhost_driver_get_protocol_features(dev->ifname, &backend_protocol_features); if (protocol_features & ~backend_protocol_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid protocol features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid protocol features."); return RTE_VHOST_MSG_RESULT_ERR; } dev->protocol_features = protocol_features; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Vhost-user protocol features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Vhost-user protocol features: 0x%" PRIx64, dev->protocol_features); return RTE_VHOST_MSG_RESULT_OK; @@ -2295,13 +2295,13 @@ vhost_user_set_log_base(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid log fd: %d\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid log fd: %d", fd); return RTE_VHOST_MSG_RESULT_ERR; } if (ctx->msg.size != sizeof(VhostUserLog)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid log base msg size: %"PRId32" != %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid log base msg size: %"PRId32" != %d", ctx->msg.size, (int)sizeof(VhostUserLog)); goto close_msg_fds; } @@ -2311,14 +2311,14 @@ vhost_user_set_log_base(struct virtio_net **pdev, /* Check for mmap size and offset overflow. 
*/ if (off >= -size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "log offset %#"PRIx64" and log size %#"PRIx64" overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "log offset %#"PRIx64" and log size %#"PRIx64" overflow", off, size); goto close_msg_fds; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "log mmap size: %"PRId64", offset: %"PRId64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "log mmap size: %"PRId64", offset: %"PRId64, size, off); /* @@ -2329,7 +2329,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, alignment = get_blk_size(fd); close(fd); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap log base failed!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2359,8 +2359,8 @@ vhost_user_set_log_base(struct virtio_net **pdev, * caching will be done, which will impact performance */ if (!vq->log_cache) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate VQ logging cache\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate VQ logging cache"); } /* @@ -2387,7 +2387,7 @@ static int vhost_user_set_log_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented."); return RTE_VHOST_MSG_RESULT_OK; } @@ -2409,8 +2409,8 @@ vhost_user_send_rarp(struct virtio_net **pdev, uint8_t *mac = (uint8_t *)&ctx->msg.payload.u64; struct rte_vdpa_device *vdpa_dev; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "MAC: " RTE_ETHER_ADDR_PRT_FMT "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "MAC: " RTE_ETHER_ADDR_PRT_FMT, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]); memcpy(dev->mac.addr_bytes, mac, 6); @@ -2438,8 +2438,8 @@ vhost_user_net_set_mtu(struct virtio_net **pdev, if (ctx->msg.payload.u64 < VIRTIO_MIN_MTU || ctx->msg.payload.u64 > VIRTIO_MAX_MTU) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid MTU size (%"PRIu64")\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid MTU size (%"PRIu64")", ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; @@ -2462,8 +2462,8 @@ vhost_user_set_req_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid file descriptor for backend channel (%d)\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid file descriptor for backend channel (%d)", fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2563,7 +2563,7 @@ vhost_user_get_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2573,10 +2573,10 @@ vhost_user_get_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size); if (ret != 0) { ctx->msg.size = 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() return error!"); } } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() not supported!"); } return RTE_VHOST_MSG_RESULT_REPLY; @@ -2595,14 +2595,14 @@ vhost_user_set_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (ctx->msg.payload.cfg.size > VHOST_USER_MAX_CONFIG_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost_user_config size: %"PRIu32", should not be larger than %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost_user_config size: %"PRIu32", should not be larger 
than %d", ctx->msg.payload.cfg.size, VHOST_USER_MAX_CONFIG_SIZE); goto out; } if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); goto out; } @@ -2613,9 +2613,9 @@ vhost_user_set_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size, ctx->msg.payload.cfg.flags); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() return error!"); } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() not supported!"); } return RTE_VHOST_MSG_RESULT_OK; @@ -2676,7 +2676,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, } break; default: - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid IOTLB message type (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid IOTLB message type (%d)", imsg->type); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2696,16 +2696,16 @@ vhost_user_set_postcopy_advise(struct virtio_net **pdev, dev->postcopy_ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); if (dev->postcopy_ufd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "userfaultfd not available: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "userfaultfd not available: %s", strerror(errno)); return RTE_VHOST_MSG_RESULT_ERR; } api_struct.api = UFFD_API; api_struct.features = 0; if (ioctl(dev->postcopy_ufd, UFFDIO_API, &api_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "UFFDIO_API ioctl failure: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "UFFDIO_API ioctl failure: %s", strerror(errno)); close(dev->postcopy_ufd); dev->postcopy_ufd = -1; @@ -2731,8 +2731,8 @@ vhost_user_set_postcopy_listen(struct virtio_net **pdev, struct virtio_net *dev = *pdev; if (dev->mem && dev->mem->nregions) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "regions already registered at postcopy-listen\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "regions already registered at postcopy-listen"); return RTE_VHOST_MSG_RESULT_ERR; } dev->postcopy_listening = 1; @@ -2783,8 +2783,8 @@ vhost_user_set_status(struct virtio_net **pdev, /* As per Virtio specification, the device status is 8bits long */ if (ctx->msg.payload.u64 > UINT8_MAX) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64, ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2793,8 +2793,8 @@ vhost_user_set_status(struct virtio_net **pdev, if ((dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK) && (dev->flags & VIRTIO_DEV_FEATURES_FAILED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "FEATURES_OK bit is set but feature negotiation failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "FEATURES_OK bit is set but feature negotiation failed"); /* * Clear the bit to let the driver know about the feature * negotiation failure @@ -2802,27 +2802,27 @@ vhost_user_set_status(struct virtio_net **pdev, dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "new device status(0x%08x):\n", dev->status); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "new device status(0x%08x):", dev->status); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-RESET: %u", (dev->status == VIRTIO_DEVICE_STATUS_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-ACKNOWLEDGE: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-ACKNOWLEDGE: %u", !!(dev->status & 
VIRTIO_DEVICE_STATUS_ACK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FEATURES_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FEATURES_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DEVICE_NEED_RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DEVICE_NEED_RESET: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DEV_NEED_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FAILED: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FAILED: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FAILED)); return RTE_VHOST_MSG_RESULT_OK; @@ -2881,14 +2881,14 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * goto out; if (ret != VHOST_USER_HDR_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Unexpected header size read\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Unexpected header size read"); ret = -1; goto out; } if (ctx->msg.size) { if (ctx->msg.size > sizeof(ctx->msg.payload)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid msg size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid msg size: %d", ctx->msg.size); ret = -1; goto out; @@ -2897,7 +2897,7 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * if (ret <= 0) goto out; if (ret != (int)ctx->msg.size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "read control message failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "read control message failed"); ret = -1; goto out; } @@ -2949,24 +2949,24 @@ send_vhost_backend_message_process_reply(struct virtio_net *dev, struct vhu_msg_ rte_spinlock_lock(&dev->backend_req_lock); ret = send_vhost_backend_message(dev, ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); goto out; } ret = read_vhost_message(dev, dev->backend_req_fd, &msg_reply); if (ret <= 0) { if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost read backend message reply failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost read backend message reply failed"); else - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); ret = -1; goto out; } if (msg_reply.msg.request.backend != ctx->msg.request.backend) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "received unexpected msg type (%u), expected %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "received unexpected msg type (%u), expected %u", msg_reply.msg.request.backend, ctx->msg.request.backend); ret = -1; goto out; @@ -3010,7 +3010,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, } if (vring_idx >= VHOST_MAX_VRING) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid vring index: %u\n", vring_idx); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid vring index: %u", vring_idx); return -1; } @@ -3078,8 +3078,8 @@ vhost_user_msg_handler(int vid, int fd) if (!dev->notify_ops) { dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to get callback ops for driver"); return -1; } } @@ -3087,7 
+3087,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.msg.request.frontend = VHOST_USER_NONE; ret = read_vhost_message(dev, fd, &ctx); if (ret == 0) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); return -1; } @@ -3098,7 +3098,7 @@ vhost_user_msg_handler(int vid, int fd) msg_handler = NULL; if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost read message %s%s%sfailed\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost read message %s%s%sfailed", msg_handler != NULL ? "for " : "", msg_handler != NULL ? msg_handler->description : "", msg_handler != NULL ? " " : ""); @@ -3107,20 +3107,20 @@ vhost_user_msg_handler(int vid, int fd) if (msg_handler != NULL && msg_handler->description != NULL) { if (request != VHOST_USER_IOTLB_MSG) - VHOST_LOG_CONFIG(dev->ifname, INFO, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "read message %s", msg_handler->description); else - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "read message %s", msg_handler->description); } else { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "external request %d\n", request); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "external request %d", request); } ret = vhost_user_check_and_alloc_queue_pair(dev, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc queue"); return -1; } @@ -3187,20 +3187,20 @@ vhost_user_msg_handler(int vid, int fd) switch (msg_result) { case RTE_VHOST_MSG_RESULT_ERR: - VHOST_LOG_CONFIG(dev->ifname, ERR, - "processing %s failed.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "processing %s failed.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_OK: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_REPLY: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded and needs reply.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded and needs reply.", msg_handler->description); send_vhost_reply(dev, fd, &ctx); handled = true; @@ -3229,8 +3229,8 @@ vhost_user_msg_handler(int vid, int fd) /* If message was not handled at this stage, treat it as an error */ if (!handled) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost message (req: %d) was not handled.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost message (req: %d) was not handled.", request); close_msg_fds(&ctx); msg_result = RTE_VHOST_MSG_RESULT_ERR; @@ -3247,7 +3247,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.fd_num = 0; send_vhost_reply(dev, fd, &ctx); } else if (msg_result == RTE_VHOST_MSG_RESULT_ERR) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost message handling failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost message handling failed."); ret = -1; goto unlock; } @@ -3296,7 +3296,7 @@ vhost_user_msg_handler(int vid, int fd) if (!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) { if (vdpa_dev->ops->dev_conf(dev->vid)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to configure vDPA device\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to configure vDPA device"); else dev->flags |= VIRTIO_DEV_VDPA_CONFIGURED; } @@ -3324,8 +3324,8 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) ret = send_vhost_message(dev, dev->backend_req_fd, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, 
ERR, - "failed to send IOTLB miss message (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to send IOTLB miss message (%d)", ret); return ret; } @@ -3358,7 +3358,7 @@ rte_vhost_backend_config_change(int vid, bool need_reply) } if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); return ret; } @@ -3390,7 +3390,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev, ret = send_vhost_backend_message_process_reply(dev, &ctx); if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to set host notifier (%d)", ret); return ret; } diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 8af20f1487..280d4845f8 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -130,8 +130,8 @@ vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, */ if (unlikely(copy_idx < 0)) { if (!vhost_async_dma_copy_log) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA copy failed for channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA copy failed for channel %d:%u", dma_id, vchan_id); vhost_async_dma_copy_log = true; } @@ -201,8 +201,8 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t */ nr_copies = rte_dma_completed(dma_id, vchan_id, max_pkts, &last_idx, &has_error); if (unlikely(!vhost_async_dma_complete_log && has_error)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA completion failure on channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA completion failure on channel %d:%u", dma_id, vchan_id); vhost_async_dma_complete_log = true; } else if (nr_copies == 0) { @@ -1062,7 +1062,7 @@ async_iter_initialize(struct virtio_net *dev, struct vhost_async *async) struct vhost_iov_iter *iter; if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); return -1; } @@ -1084,7 +1084,7 @@ async_iter_add_iovec(struct virtio_net *dev, struct vhost_async *async, static bool vhost_max_async_vec_log; if (!vhost_max_async_vec_log) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); vhost_max_async_vec_log = true; } @@ -1145,8 +1145,8 @@ async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, host_iova = (void *)(uintptr_t)gpa_to_first_hpa(dev, buf_iova + buf_offset, cpy_len, &mapped_len); if (unlikely(!host_iova)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: failed to get host iova.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: failed to get host iova.", __func__); return -1; } @@ -1243,7 +1243,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, } else hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr; - VHOST_LOG_DATA(dev->ifname, DEBUG, "RX: num merge buffers %d\n", num_buffers); + VHOST_DATA_LOG(dev->ifname, DEBUG, "RX: num merge buffers %d", num_buffers); if (unlikely(buf_len < dev->vhost_hlen)) { buf_offset = dev->vhost_hlen - buf_len; @@ -1428,14 +1428,14 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, 
DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, @@ -1645,12 +1645,12 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, if (unlikely(vhost_enqueue_single_packed(dev, vq, pkt, buf_vec, &nr_descs) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + nr_descs); vq_inc_last_avail_packed(vq, nr_descs); @@ -1702,7 +1702,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); rte_rwlock_read_lock(&vq->access_lock); if (unlikely(!vq->enabled)) @@ -1744,15 +1744,15 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -1821,14 +1821,14 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, num_buffers, true) < 0) { @@ -1853,8 +1853,8 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(pkt_err)) { uint16_t num_descs = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); /* update number of completed packets */ @@ -1967,12 +1967,12 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(vhost_enqueue_async_packed(dev, vq, pkt, buf_vec, nr_descs, nr_buffers) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + *nr_descs); return 0; @@ -2151,8 +2151,8 @@ 
virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx); } @@ -2344,18 +2344,18 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, if (unlikely(!dev)) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2363,15 +2363,15 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); goto out; } @@ -2399,15 +2399,15 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2417,16 +2417,16 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, vq_assert_lock(dev, vq); if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2455,15 +2455,15 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %u.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, 
"%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2471,20 +2471,20 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %u.", __func__, queue_id); goto out_access_unlock; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); goto out_access_unlock; } @@ -2511,12 +2511,12 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2565,15 +2565,15 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -2743,8 +2743,8 @@ vhost_dequeue_offload_legacy(struct virtio_net *dev, struct virtio_net_hdr *hdr, m->l4_len = sizeof(struct rte_udp_hdr); break; default: - VHOST_LOG_DATA(dev->ifname, WARNING, - "unsupported gso type %u.\n", + VHOST_DATA_LOG(dev->ifname, WARNING, + "unsupported gso type %u.", hdr->gso_type); goto error; } @@ -2975,8 +2975,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (mbuf_avail == 0) { cur = rte_pktmbuf_alloc(mbuf_pool); if (unlikely(cur == NULL)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to allocate memory for mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to allocate memory for mbuf."); goto error; } @@ -3041,7 +3041,7 @@ virtio_dev_extbuf_alloc(struct virtio_net *dev, struct rte_mbuf *pkt, uint32_t s virtio_dev_extbuf_free, buf); if (unlikely(shinfo == NULL)) { rte_free(buf); - VHOST_LOG_DATA(dev->ifname, ERR, "failed to init shinfo\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to init shinfo"); return -1; } @@ -3097,11 +3097,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + 
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) return 0; @@ -3138,8 +3138,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. */ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3152,7 +3152,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } dropped += 1; @@ -3421,8 +3421,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3433,7 +3433,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } return -1; @@ -3556,15 +3556,15 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -3609,7 +3609,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } @@ -3731,7 +3731,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) goto out; @@ -3768,8 +3768,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. 
*/ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed mbuf alloc of size %d from %s\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed mbuf alloc of size %d from %s", __func__, buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3783,8 +3783,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, legacy_ol_flags, slot_idx, true); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed to offload copies to async channel.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed to offload copies to async channel.", __func__); allocerr_warned = true; } @@ -3814,7 +3814,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: failed to transfer data.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: failed to transfer data.", __func__); pkt_idx = n_xfer; @@ -3914,7 +3914,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; @@ -3927,7 +3927,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(err)) { rte_pktmbuf_free(pkts); if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed to copy desc to mbuf on.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "Failed to copy desc to mbuf on."); allocerr_warned = true; } return -1; @@ -4019,7 +4019,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct async_inflight_info *pkts_info = async->pkts_info; struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; - VHOST_LOG_DATA(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers\n", dev->vid, count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers", dev->vid, count); async_iter_reset(async); @@ -4153,26 +4153,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, *nr_inflight = -1; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -4188,7 +4188,7 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.", __func__, queue_id); count = 0; goto out_access_unlock; @@ -4224,7 +4224,7 @@ 
rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c index c4847f84ed..8f78122361 100644 --- a/lib/vhost/virtio_net_ctrl.c +++ b/lib/vhost/virtio_net_ctrl.c @@ -36,13 +36,13 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, avail_idx = rte_atomic_load_explicit((unsigned short __rte_atomic *)&cvq->avail->idx, rte_memory_order_acquire); if (avail_idx == cvq->last_avail_idx) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue empty"); return 0; } desc_idx = cvq->avail->ring[cvq->last_avail_idx]; if (desc_idx >= cvq->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Out of range desc index, dropping"); goto err; } @@ -55,7 +55,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto err; } @@ -72,28 +72,28 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, if (descs[desc_idx].flags & VRING_DESC_F_WRITE) { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } if (desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Invalid ack size for ctrl req, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Invalid ack size for ctrl req, dropping"); goto err; } ctrl_elem->desc_ack = (uint8_t *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_WO); if (!ctrl_elem->desc_ack || desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to map ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to map ctrl ack descriptor"); goto err; } } else { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } @@ -114,18 +114,18 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, ctrl_elem->n_descs = n_descs; if (!ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Missing ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Missing ctrl ack descriptor"); goto err; } if (data_len < sizeof(ctrl_elem->ctrl_req->class) + sizeof(ctrl_elem->ctrl_req->command)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Invalid control header size\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Invalid control header size"); goto err; } ctrl_elem->ctrl_req = malloc(data_len); if (!ctrl_elem->ctrl_req) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to alloc ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to alloc ctrl request"); goto err; } @@ -138,7 +138,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, 
&desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto free_err; } @@ -153,7 +153,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, desc_addr = vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!desc_addr || desc_len < descs[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl descriptor"); goto free_err; } @@ -199,7 +199,7 @@ virtio_net_ctrl_handle_req(struct virtio_net *dev, struct virtio_net_ctrl *ctrl_ uint32_t i; queue_pairs = *(uint16_t *)(uintptr_t)ctrl_req->command_data; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs\n", queue_pairs); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs", queue_pairs); ret = VIRTIO_NET_OK; for (i = 0; i < dev->nr_vring; i++) { @@ -253,12 +253,12 @@ virtio_net_ctrl_handle(struct virtio_net *dev) int ret = 0; if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Packed ring not supported yet\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Packed ring not supported yet"); return -1; } if (!dev->cvq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "missing control queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "missing control queue"); return -1; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
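For reference when reading the conversions above and below: the renamed helpers are later redefined on top of a generic per-line logging helper. A minimal, standalone sketch of such a helper is shown here; LOG_LINE(), LOG_CHECK_NO_NEWLINE() and log_write() are illustrative names rather than DPDK API, and the build-time rejection of an embedded \n is only sketched for gcc.

#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

/* Illustrative backend; DPDK routes this through rte_log() instead. */
static void
log_write(const char *fmt, ...)
{
	va_list ap;

	va_start(ap, fmt);
	vfprintf(stderr, fmt, ap);
	va_end(ap);
}

#if defined(__GNUC__) && !defined(__clang__)
/* gcc folds __builtin_strchr() on string literals, so an embedded \n in the
 * format string becomes a build error instead of a runtime surprise. */
#define LOG_CHECK_NO_NEWLINE(fmt) \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain a \\n")
#else
#define LOG_CHECK_NO_NEWLINE(fmt)
#endif

/* Append the newline once, here, instead of in every caller. */
#define LOG_LINE(fmt, ...) do { \
	LOG_CHECK_NO_NEWLINE(fmt); \
	log_write(fmt "\n", ##__VA_ARGS__); \
} while (0)

int
main(void)
{
	LOG_LINE("vring kick idx:%d file:%d", 0, -1);	/* one line, no \n needed */
	/* LOG_LINE("oops\n"); */			/* would fail to build with gcc */
	return 0;
}

With gcc, uncommenting the second call fails the build, which is how a stray \n in a converted format string would get caught.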
* Re: [RFC v2 13/14] lib: replace logging helpers 2023-12-08 14:59 ` [RFC v2 13/14] lib: replace logging helpers David Marchand @ 2023-12-08 17:18 ` Stephen Hemminger 2023-12-11 12:36 ` David Marchand 2023-12-16 9:42 ` Andrew Rybchenko 1 sibling, 1 reply; 122+ messages in thread From: Stephen Hemminger @ 2023-12-08 17:18 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Ruifeng Wang, Andrew Rybchenko, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia On Fri, 8 Dec 2023 15:59:47 +0100 David Marchand <david.marchand@redhat.com> wrote: > diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h > index b483569071..30d83d2b40 100644 > --- a/lib/bpf/bpf_impl.h > +++ b/lib/bpf/bpf_impl.h > @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); > int __rte_bpf_jit_arm64(struct rte_bpf *bpf); > > extern int rte_bpf_logtype; > +#define RTE_LOGTYPE_BPF rte_bpf_logtype > > -#define RTE_BPF_LOG(lvl, fmt, args...) \ > - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) > +#define BPF_LOG(lvl, fmt, args...) \ > + RTE_LOG(lvl, BPF, fmt "\n", ##args) Not sure about this. There were some cases where bpf_XXX function names clashed with those in libpcap. That is probably why the RTE_BPF_LOG was chosen. ^ permalink raw reply [flat|nested] 122+ messages in thread
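One way to accommodate that concern, sketched here purely as an illustration and not as what the patch does, is to keep an RTE_-prefixed macro name for lib/bpf (so nothing new lands near libpcap's bpf_* identifiers) while still delegating to the per-line helper; RTE_BPF_LOG_LINE is a hypothetical name and RTE_LOG_LINE is the helper added by this series, not part of older DPDK releases.

#include <rte_log.h>

extern int rte_bpf_logtype;
#define RTE_LOGTYPE_BPF rte_bpf_logtype

/* Hypothetical rename that stays inside the rte_/RTE_ namespace. */
#define RTE_BPF_LOG_LINE(lvl, fmt, args...) \
	RTE_LOG_LINE(lvl, BPF, fmt, ## args)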
* Re: [RFC v2 13/14] lib: replace logging helpers 2023-12-08 17:18 ` Stephen Hemminger @ 2023-12-11 12:36 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-11 12:36 UTC (permalink / raw) To: Stephen Hemminger Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, Konstantin Ananyev, Ruifeng Wang, Andrew Rybchenko, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia On Fri, Dec 8, 2023 at 6:18 PM Stephen Hemminger <stephen@networkplumber.org> wrote: > > On Fri, 8 Dec 2023 15:59:47 +0100 > David Marchand <david.marchand@redhat.com> wrote: > > > diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h > > index b483569071..30d83d2b40 100644 > > --- a/lib/bpf/bpf_impl.h > > +++ b/lib/bpf/bpf_impl.h > > @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); > > int __rte_bpf_jit_arm64(struct rte_bpf *bpf); > > > > extern int rte_bpf_logtype; > > +#define RTE_LOGTYPE_BPF rte_bpf_logtype > > > > -#define RTE_BPF_LOG(lvl, fmt, args...) \ > > - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) > > +#define BPF_LOG(lvl, fmt, args...) \ > > + RTE_LOG(lvl, BPF, fmt "\n", ##args) > > Not sure about this. There were some cases where bpf_XXX function > names clashed with those in libpcap. That is probably why the > RTE_BPF_LOG was chosen. > That would only impact DPDK compilation as it is an internal header, but I get your point. I have put a note to update this in the next revision. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 13/14] lib: replace logging helpers 2023-12-08 14:59 ` [RFC v2 13/14] lib: replace logging helpers David Marchand 2023-12-08 17:18 ` Stephen Hemminger @ 2023-12-16 9:42 ` Andrew Rybchenko 1 sibling, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:42 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Ruifeng Wang, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia On 12/8/23 17:59, David Marchand wrote: > This is a preparation step before the next change. > > Many libraries have their own logging helpers that do not add a newline > in their format string. > Some previous changes fixed places where some of those helpers are > called without a trailing newline. > Using RTE_LOG_LINE in the existing helpers will ensure we don't > introduce new issues in the future. > > The problem is that if we simply convert to the RTE_LOG_LINE helper, > a future fix may introduce a regression since the logging helper > change won't be backported. > > To address this concern, rename existing helpers: backporting a call to > them will trigger some conflict or build issue in LTS branches. > > Note: > - for bpf and vhost, which still have some multi-line debug messages, a direct > call to RTE_LOG/RTE_LOG_DP is used: this will make it easier to notice > such special cases, > - about previously publicly exposed logging helpers, when such a helper is > not publicly used (iow in public inline API), it is removed from the > public API (this is the case for the member library), > > Signed-off-by: David Marchand <david.marchand@redhat.com> For ethdev Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
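The backport argument in the commit message above can be made concrete with a toy example; FOO_LOG, FOO_LOG_LINE and the LTS_BRANCH guard are invented for illustration and are not DPDK code.

#include <stdio.h>

/* Old helper (as on an LTS branch): callers pass their own \n.
 * Renamed helper (after this series): the \n is appended by the macro. */
#ifdef LTS_BRANCH
#define FOO_LOG(fmt, ...)	printf(fmt, ##__VA_ARGS__)
#else
#define FOO_LOG_LINE(fmt, ...)	printf(fmt "\n", ##__VA_ARGS__)
#endif

int
main(void)
{
#ifndef LTS_BRANCH
	/* A fix written against the main branch: */
	FOO_LOG_LINE("queue %u ready", 0u);
#endif
	/*
	 * Backported as-is to a tree built with -DLTS_BRANCH, the call above
	 * does not compile because FOO_LOG_LINE does not exist there. Had the
	 * old FOO_LOG name been reused with the new semantics, the backport
	 * would build fine and silently log without a newline.
	 */
	return 0;
}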
* [RFC v2 14/14] lib: use per line logging in helpers 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand ` (12 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 13/14] lib: replace logging helpers David Marchand @ 2023-12-08 14:59 ` David Marchand 2023-12-09 7:19 ` fengchengwen 2023-12-16 9:41 ` Andrew Rybchenko 13 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-08 14:59 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Nicolas Chautru, Konstantin Ananyev, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Chengwen Feng, Kevin Laatz, Andrew Rybchenko, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Yipeng Wang, Sameh Gobriel, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Ciara Power, Maxime Coquelin, Chenbo Xia Use RTE_LOG_LINE in existing macros that append a \n. Signed-off-by: David Marchand <david.marchand@redhat.com> --- Changes since RFC v1: - converted all logging helpers in lib/, --- lib/bbdev/rte_bbdev.c | 5 +++-- lib/bpf/bpf_impl.h | 2 +- lib/cfgfile/rte_cfgfile.c | 4 ++-- lib/compressdev/rte_compressdev_internal.h | 5 +++-- lib/cryptodev/rte_cryptodev.h | 16 +++++++--------- lib/dmadev/rte_dmadev.c | 6 ++++-- lib/ethdev/rte_ethdev.h | 3 +-- lib/eventdev/eventdev_pmd.h | 8 ++++---- lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- lib/gpudev/gpudev.c | 6 ++++-- lib/graph/graph_private.h | 5 +++-- lib/member/member.h | 4 ++-- lib/metrics/rte_metrics_telemetry.c | 4 ++-- lib/mldev/rte_mldev.h | 5 +++-- lib/net/rte_net_crc.c | 8 ++++---- lib/node/node_private.h | 6 ++++-- lib/pdump/rte_pdump.c | 5 ++--- lib/power/power_common.h | 2 +- lib/rawdev/rte_rawdev_pmd.h | 4 ++-- lib/rcu/rte_rcu_qsbr.h | 8 +++----- lib/regexdev/rte_regexdev.h | 3 +-- lib/stack/stack_pvt.h | 4 ++-- lib/telemetry/telemetry.c | 4 +--- lib/vhost/vhost.h | 8 ++++---- lib/vhost/vhost_crypto.c | 6 +++--- 25 files changed, 76 insertions(+), 72 deletions(-) diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index e09bb97abb..13bde3c25b 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -28,10 +28,11 @@ /* BBDev library logging ID */ RTE_LOG_REGISTER_DEFAULT(bbdev_logtype, NOTICE); +#define RTE_LOGTYPE_BBDEV bbdev_logtype /* Helper macro for logging */ -#define rte_bbdev_log(level, fmt, ...) \ - rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__) +#define rte_bbdev_log(level, ...) \ + RTE_LOG_LINE(level, BBDEV, "" __VA_ARGS__) #define rte_bbdev_log_debug(fmt, ...) \ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \ diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index 30d83d2b40..17f38faec1 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -30,7 +30,7 @@ extern int rte_bpf_logtype; #define RTE_LOGTYPE_BPF rte_bpf_logtype #define BPF_LOG(lvl, fmt, args...) \ - RTE_LOG(lvl, BPF, fmt "\n", ##args) + RTE_LOG_LINE(lvl, BPF, fmt, ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index 2f9cc0722a..6a5e4fd942 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -29,10 +29,10 @@ struct rte_cfgfile { /* Setting up dynamic logging 8< */ RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO); +#define RTE_LOGTYPE_CFGFILE cfgfile_logtype #define CFG_LOG(level, fmt, args...) 
\ - rte_log(RTE_LOG_ ## level, cfgfile_logtype, "%s(): " fmt "\n", \ - __func__, ## args) + RTE_LOG_LINE(level, CFGFILE, "%s(): " fmt, __func__, ## args) /* >8 End of setting up dynamic logging */ /** when we resize a file structure, how many extra entries diff --git a/lib/compressdev/rte_compressdev_internal.h b/lib/compressdev/rte_compressdev_internal.h index b3b193e3ee..01b7764282 100644 --- a/lib/compressdev/rte_compressdev_internal.h +++ b/lib/compressdev/rte_compressdev_internal.h @@ -21,9 +21,10 @@ extern "C" { /* Logging Macros */ extern int compressdev_logtype; +#define RTE_LOGTYPE_COMPRESSDEV compressdev_logtype + #define COMPRESSDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, compressdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, COMPRESSDEV, "%s(): " fmt, __func__, ## args) /** * Dequeue processed packets from queue pair of a device. diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index aaeaf294e6..563f011299 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -31,23 +31,21 @@ extern const char **rte_cyptodev_names; /* Logging Macros */ #define CDEV_LOG_ERR(...) \ - RTE_LOG(ERR, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_LOG_INFO(...) \ - RTE_LOG(INFO, CRYPTODEV, \ - RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(INFO, CRYPTODEV, "" __VA_ARGS__) #define CDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_PMD_TRACE(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,), \ dev, __func__, RTE_FMT_TAIL(__VA_ARGS__,))) /** diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 009a21849a..c1a166858c 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -32,9 +32,11 @@ static struct { } *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_LOGTYPE_DMA rte_dma_logtype + #define RTE_DMA_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_dma_logtype, RTE_FMT("dma: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, DMA, RTE_FMT("dma: " RTE_FMT_HEAD(__VA_ARGS__,), \ + RTE_FMT_TAIL(__VA_ARGS__,))) int rte_dma_dev_max(size_t dev_max) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 18debce99c..21e3a21903 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -179,8 +179,7 @@ extern int rte_eth_dev_logtype; #define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype #define RTE_ETHDEV_LOG_LINE(level, ...) \ - RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, ETHDEV, "" __VA_ARGS__) struct rte_mbuf; diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 2ec5aec0a8..50cf7d9057 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -33,14 +33,14 @@ extern "C" { /* Logging Macros */ #define RTE_EDEV_LOG_ERR(...) 
\ - RTE_LOG(ERR, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define RTE_EDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #else #define RTE_EDEV_LOG_DEBUG(...) (void)0 diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 3f22e85173..6ebb7b257e 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -30,27 +30,30 @@ #define DATA_MZ_NAME_FORMAT "rte_event_timer_adapter_data_%d" RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM evtim_logtype RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM_BUF evtim_buffer_logtype RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE); +#define RTE_LOGTYPE_EVTIM_SVC evtim_svc_logtype static struct rte_event_timer_adapter *adapters; static const struct event_timer_adapter_ops swtim_ops; #define EVTIM_LOG(level, logtype, ...) \ - rte_log(RTE_LOG_ ## level, logtype, \ - RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) \ - "\n", __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, logtype, \ + RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) -#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, evtim_logtype, __VA_ARGS__) +#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, EVTIM, __VA_ARGS__) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define EVTIM_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM, __VA_ARGS__) #define EVTIM_BUF_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_buffer_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_BUF, __VA_ARGS__) #define EVTIM_SVC_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_svc_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_SVC, __VA_ARGS__) #else #define EVTIM_LOG_DBG(...) (void)0 #define EVTIM_BUF_LOG_DBG(...) (void)0 diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c index 6845d18b4d..79118c3e94 100644 --- a/lib/gpudev/gpudev.c +++ b/lib/gpudev/gpudev.c @@ -17,9 +17,11 @@ /* Logging */ RTE_LOG_REGISTER_DEFAULT(gpu_logtype, NOTICE); +#define RTE_LOGTYPE_GPUDEV gpu_logtype + #define GPU_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, gpu_logtype, RTE_FMT("gpu: " \ - RTE_FMT_HEAD(__VA_ARGS__, ) "\n", RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GPUDEV, RTE_FMT("gpu: " RTE_FMT_HEAD(__VA_ARGS__, ), \ + RTE_FMT_TAIL(__VA_ARGS__, ))) /* Set any driver error as EPERM */ #define GPU_DRV_RET(function) \ diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d0ef13b205..672a034287 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -18,10 +18,11 @@ #include "rte_graph_worker.h" extern int rte_graph_logtype; +#define RTE_LOGTYPE_GRAPH rte_graph_logtype #define GRAPH_LOG(level, ...) \ - rte_log(RTE_LOG_##level, rte_graph_logtype, \ - RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, GRAPH, \ + RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__, ))) #define graph_err(...) 
GRAPH_LOG(ERR, __VA_ARGS__) diff --git a/lib/member/member.h b/lib/member/member.h index ce150f7689..56dd2782a6 100644 --- a/lib/member/member.h +++ b/lib/member/member.h @@ -8,7 +8,7 @@ extern int librte_member_logtype; #define RTE_LOGTYPE_MEMBER librte_member_logtype #define MEMBER_LOG(level, ...) \ - RTE_LOG(level, MEMBER, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, RTE_FMT_TAIL(__VA_ARGS__,))) diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 1d133e1f8c..b8c9d75a7d 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -16,11 +16,11 @@ struct telemetry_metrics_data tel_met_data; int metrics_log_level; +#define RTE_LOGTYPE_METRICS metrics_log_level /* Logging Macros */ #define METRICS_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, metrics_log_level, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, METRICS, "%s(): "fmt, __func__, ## args) #define METRICS_LOG_ERR(fmt, args...) \ METRICS_LOG(ERR, fmt, ## args) diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 63b2670bb0..5cf6f0566f 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -144,9 +144,10 @@ extern "C" { /* Logging Macro */ extern int rte_ml_dev_logtype; +#define RTE_LOGTYPE_MLDEV rte_ml_dev_logtype -#define RTE_MLDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_##level, rte_ml_dev_logtype, "%s(): " fmt "\n", __func__, ##args) +#define RTE_MLDEV_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, MLDEV, "%s(): " fmt, __func__, ##args) #define RTE_ML_STR_MAX 128 /**< Maximum length of name string */ diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index 900d6de7f4..b401ea3dd8 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -70,11 +70,11 @@ static const rte_net_crc_handler handlers_neon[] = { static uint16_t max_simd_bitwidth; -#define NET_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, libnet_logtype, "%s(): " fmt "\n", \ - __func__, ## args) - RTE_LOG_REGISTER_DEFAULT(libnet_logtype, INFO); +#define RTE_LOGTYPE_NET libnet_logtype + +#define NET_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, NET, "%s(): " fmt, __func__, ## args) /* Scalar handling */ diff --git a/lib/node/node_private.h b/lib/node/node_private.h index 26135aaa5b..5702146db4 100644 --- a/lib/node/node_private.h +++ b/lib/node/node_private.h @@ -11,9 +11,11 @@ #include <rte_mbuf_dyn.h> extern int rte_node_logtype; +#define RTE_LOGTYPE_NODE rte_node_logtype + #define NODE_LOG(level, node_name, ...) \ - rte_log(RTE_LOG_##level, rte_node_logtype, \ - RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, NODE, \ + RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ node_name, __func__, __LINE__, \ RTE_FMT_TAIL(__VA_ARGS__, ))) diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 70963e7ee7..3954c06a87 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -18,9 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); #define RTE_LOGTYPE_PDUMP pdump_logtype -#define PDUMP_LOG_LINE(level, fmt, args...) \ - RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ - __func__, ## args) +#define PDUMP_LOG_LINE(level, fmt, args...) 
\ + RTE_LOG_LINE(level, PDUMP, "%s(): " fmt, __func__, ## args) /* Used for the multi-process communication */ #define PDUMP_MP "mp_pdump" diff --git a/lib/power/power_common.h b/lib/power/power_common.h index ea2febbd86..4e32548169 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -15,7 +15,7 @@ extern int power_logtype; #ifdef RTE_LIBRTE_POWER_DEBUG #define POWER_DEBUG_LOG(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) + RTE_LOG_LINE(ERR, POWER, "%s: " fmt, __func__, ## args) #else #define POWER_DEBUG_LOG(fmt, args...) #endif diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h index 7b9ef1d09f..7173282c66 100644 --- a/lib/rawdev/rte_rawdev_pmd.h +++ b/lib/rawdev/rte_rawdev_pmd.h @@ -27,11 +27,11 @@ extern "C" { #include "rte_rawdev.h" extern int librawdev_logtype; +#define RTE_LOGTYPE_RAWDEV librawdev_logtype /* Logging Macros */ #define RTE_RDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, librawdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, RAWDEV, "%s(): " fmt, __func__, ##args) #define RTE_RDEV_ERR(fmt, args...) \ RTE_RDEV_LOG(ERR, fmt, ## args) diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 0dca8310c0..23c9f89805 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -40,17 +40,15 @@ extern int rte_rcu_log_type; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) #else #define __RTE_RCU_DP_LOG(level, fmt, args...) #endif #if defined(RTE_LIBRTE_RCU_DEBUG) -#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\ +#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do { \ if (v->qsbr_cnt[thread_id].lock_cnt) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args); \ + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args); \ } while (0) #else #define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index dc111317a5..a50b841b1e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -209,8 +209,7 @@ extern int rte_regexdev_logtype; #define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype #define RTE_REGEXDEV_LOG_LINE(level, ...) \ - RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, REGEXDEV, "" __VA_ARGS__) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ diff --git a/lib/stack/stack_pvt.h b/lib/stack/stack_pvt.h index c7eab4027d..2dce42a9da 100644 --- a/lib/stack/stack_pvt.h +++ b/lib/stack/stack_pvt.h @@ -8,10 +8,10 @@ #include <rte_log.h> extern int stack_logtype; +#define RTE_LOGTYPE_STACK stack_logtype #define STACK_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, STACK, "%s(): "fmt, __func__, ##args) #define STACK_LOG_ERR(fmt, args...) 
\ STACK_LOG(ERR, fmt, ## args) diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 5c655e2b25..31e2391867 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -57,9 +57,7 @@ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); #define RTE_LOGTYPE_TMTY logtype -#define TMTY_LOG_LINE(l, ...) \ - RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) +#define TMTY_LOG_LINE(l, ...) RTE_LOG_LINE(l, TMTY, "TELEMETRY: " __VA_ARGS__) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5a74d0e628..25c0f86e55 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -678,12 +678,12 @@ extern int vhost_data_log_level; #define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level #define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ - RTE_LOG(level, VHOST_CONFIG, \ - "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_LINE(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt, prefix, ##args) #define VHOST_DATA_LOG(prefix, level, fmt, args...) \ - RTE_LOG_DP(level, VHOST_DATA, \ - "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_DP_LINE(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt, prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 6e5443e5f8..3704fbbb3d 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -21,15 +21,15 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO); #define RTE_LOGTYPE_VHOST_CRYPTO vhost_crypto_logtype #define VC_LOG_ERR(fmt, args...) \ - RTE_LOG(ERR, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(ERR, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #define VC_LOG_INFO(fmt, args...) \ - RTE_LOG(INFO, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(INFO, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VC_LOG_DBG(fmt, args...) \ - RTE_LOG(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #else #define VC_LOG_DBG(fmt, args...) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
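A note on the RTE_FMT_HEAD()/RTE_FMT_TAIL() gymnastics visible in the hunks above: these rte_common.h helpers split a variadic argument list into the format string (head) and the remaining arguments (tail), so a wrapper macro can glue a prefix such as "GRAPH: %s():%u " in front of the caller's format string by string-literal concatenation while RTE_LOG_LINE appends the single trailing \n. The toy program below only sketches that pattern with plain printf() standing in for the log machinery; the FMT_HEAD, FMT_TAIL and TOY_LOG_LINE names are made up for the example and the real rte_common.h definitions differ in details.

#include <stdio.h>

/* Rough stand-ins for RTE_FMT_HEAD()/RTE_FMT_TAIL(). */
#define FMT_HEAD(fmt, ...) fmt         /* keep only the format string literal */
#define FMT_TAIL(fmt, ...) __VA_ARGS__ /* keep only the arguments that follow */

/*
 * Toy wrapper following the pattern of the GRAPH_LOG/NODE_LOG hunks above:
 * prepend a prefix by literal concatenation, append the '\n' in one place.
 * The trailing "%.0s" / "" pair absorbs the empty argument produced by the
 * ',' inside FMT_TAIL(__VA_ARGS__, ), so calls without arguments also build.
 */
#define TOY_LOG_LINE(...) \
	printf("%s(): " FMT_HEAD(__VA_ARGS__, ) "\n%.0s", \
		__func__, FMT_TAIL(__VA_ARGS__, ) "")

int main(void)
{
	TOY_LOG_LINE("%d queues ready", 4); /* prints "main(): 4 queues ready" */
	TOY_LOG_LINE("done");               /* prints "main(): done" */
	return 0;
}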
* Re: [RFC v2 14/14] lib: use per line logging in helpers 2023-12-08 14:59 ` [RFC v2 14/14] lib: use per line logging in helpers David Marchand @ 2023-12-09 7:19 ` fengchengwen 2023-12-16 9:41 ` Andrew Rybchenko 1 sibling, 0 replies; 122+ messages in thread From: fengchengwen @ 2023-12-09 7:19 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Nicolas Chautru, Konstantin Ananyev, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Andrew Rybchenko, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Yipeng Wang, Sameh Gobriel, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Ciara Power, Maxime Coquelin, Chenbo Xia For lib/dmadev part Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> On 2023/12/8 22:59, David Marchand wrote: > Use RTE_LOG_LINE in existing macros that append a \n. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - converted all logging helpers in lib/, > > --- > lib/bbdev/rte_bbdev.c | 5 +++-- > lib/bpf/bpf_impl.h | 2 +- > lib/cfgfile/rte_cfgfile.c | 4 ++-- > lib/compressdev/rte_compressdev_internal.h | 5 +++-- > lib/cryptodev/rte_cryptodev.h | 16 +++++++--------- > lib/dmadev/rte_dmadev.c | 6 ++++-- > lib/ethdev/rte_ethdev.h | 3 +-- > lib/eventdev/eventdev_pmd.h | 8 ++++---- > lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- > lib/gpudev/gpudev.c | 6 ++++-- > lib/graph/graph_private.h | 5 +++-- > lib/member/member.h | 4 ++-- > lib/metrics/rte_metrics_telemetry.c | 4 ++-- > lib/mldev/rte_mldev.h | 5 +++-- > lib/net/rte_net_crc.c | 8 ++++---- > lib/node/node_private.h | 6 ++++-- > lib/pdump/rte_pdump.c | 5 ++--- > lib/power/power_common.h | 2 +- > lib/rawdev/rte_rawdev_pmd.h | 4 ++-- > lib/rcu/rte_rcu_qsbr.h | 8 +++----- > lib/regexdev/rte_regexdev.h | 3 +-- > lib/stack/stack_pvt.h | 4 ++-- > lib/telemetry/telemetry.c | 4 +--- > lib/vhost/vhost.h | 8 ++++---- > lib/vhost/vhost_crypto.c | 6 +++--- > 25 files changed, 76 insertions(+), 72 deletions(-) > ... ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [RFC v2 14/14] lib: use per line logging in helpers 2023-12-08 14:59 ` [RFC v2 14/14] lib: use per line logging in helpers David Marchand 2023-12-09 7:19 ` fengchengwen @ 2023-12-16 9:41 ` Andrew Rybchenko 1 sibling, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-16 9:41 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Nicolas Chautru, Konstantin Ananyev, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Chengwen Feng, Kevin Laatz, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Yipeng Wang, Sameh Gobriel, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Ciara Power, Maxime Coquelin, Chenbo Xia On 12/8/23 17:59, David Marchand wrote: > Use RTE_LOG_LINE in existing macros that append a \n. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > --- > Changes since RFC v1: > - converted all logging helpers in lib/, > > --- For ethdev Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 00/14] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (5 preceding siblings ...) 2023-12-08 14:59 ` [RFC v2 00/14] " David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 01/14] hash: remove some dead code David Marchand ` (13 more replies) 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand 8 siblings, 14 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb

Getting readable and consistent logs is important when running a DPDK application, especially when troubleshooting. A common issue with logs is when a DPDK change does not add a \n (or, on the contrary, adds too many) to the format string. This issue would only get noticed when actually hitting this log (which may be a situation hard to reach).

This series proposes to introduce a new RTE_LOG_LINE helper that logs a one-line message and spews a build error (with gcc) if any \n is part of the format string.

Because this is still an RFC and a lot of changes are added in this v2, no ack from the v1 has been kept. Since the v1 discussion on the cover letter, I changed my mind and made the choice to break existing logging helpers exported in the public API. The reasoning is that those should not be used in the first place: logs should be produced only by the library that registers the logtype.

Some multiline logging for debugging and the test assert macros are still present, but in this case an explicit call to RTE_LOG() is done. This can be checked with a simple:

$ git grep 'RTE_LOG(' -- lib/ :^lib/log/
lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n"
lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n"
lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n"
lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF,
lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n"
lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \
lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...)
RTE_LOG(lvl, IPFRAG, fmt, ##args) lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" Changes since RFC v2: - sent as non RFC, - fixed format string crossing line boundaries, - avoided potential collision with BPF_ namespace, Changes since RFC v1: - rebased after Stephen log changes, - added more fixes as I was making progress on the topic, - added a check so dpdk developers stop using RTE_LOG(), - added preparation patches, like "lib: replace logging helpers", - converted all libraries, keeping some special cases with explicit calls to RTE_LOG, -- David Marchand David Marchand (14): hash: remove some dead code regexdev: fix logtype register lib: use dedicated logtypes and macros lib: add newline in logs lib: remove redundant newline from logs eal/linux: remove log paraphrasing the doc bpf: remove log level in internal helper lib: simplify multilines log messages rcu: introduce a logging helper vhost: improve log for memory dumping configuration log: add a per line log helper lib: convert to per line logging lib: replace logging helpers lib: use per line logging in helpers devtools/checkpatches.sh | 8 + drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/acl/acl_bld.c | 28 +- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 4 +- lib/bbdev/rte_bbdev.c | 11 +- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 6 +- lib/bpf/bpf_validate.c | 44 +- lib/cfgfile/rte_cfgfile.c | 18 +- lib/compressdev/rte_compressdev_internal.h | 5 +- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 4 +- lib/cryptodev/rte_cryptodev.h | 16 +- lib/dispatcher/rte_dispatcher.c | 12 +- lib/dmadev/rte_dmadev.c | 8 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 +- lib/eal/common/eal_common_lcore.c | 26 +- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +- lib/eal/common/eal_common_memzone.c | 24 +- lib/eal/common/eal_common_options.c | 236 +++--- lib/eal/common/eal_common_proc.c | 112 +-- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 +- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 +- lib/eal/common/malloc_mp.c | 72 +- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 74 +- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +-- lib/eal/linux/eal_alarm.c | 2 +- 
lib/eal/linux/eal_dev.c | 40 +- lib/eal/linux/eal_hugepage_info.c | 38 +- lib/eal/linux/eal_interrupts.c | 116 +-- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +-- lib/eal/linux/eal_memory.c | 208 ++--- lib/eal/linux/eal_thread.c | 4 +- lib/eal/linux/eal_timer.c | 14 +- lib/eal/linux/eal_vfio.c | 270 +++---- lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +- lib/eal/windows/eal.c | 36 +- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 6 +- lib/eal/windows/eal_memalloc.c | 50 +- lib/eal/windows/eal_memory.c | 22 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 4 +- lib/eal/windows/rte_thread.c | 28 +- lib/efd/rte_efd.c | 58 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 854 ++++++++++----------- lib/ethdev/rte_ethdev.h | 51 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/eventdev/eventdev_pmd.h | 14 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +- lib/eventdev/rte_event_dma_adapter.c | 18 +- lib/eventdev/rte_event_eth_rx_adapter.c | 40 +- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 21 +- lib/eventdev/rte_eventdev.c | 10 +- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/gpudev/gpudev.c | 6 +- lib/graph/graph_private.h | 5 +- lib/hash/rte_cuckoo_hash.c | 52 +- lib/hash/rte_cuckoo_hash.h | 11 - lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 +- lib/log/log.c | 6 +- lib/log/rte_log.h | 21 + lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/mempool/rte_mempool.c | 24 +- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/metrics/rte_metrics_telemetry.c | 6 +- lib/mldev/rte_mldev.c | 102 +-- lib/mldev/rte_mldev.h | 5 +- lib/net/rte_net_crc.c | 14 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/node/node_private.h | 6 +- lib/pdump/rte_pdump.c | 113 ++- lib/pipeline/rte_pipeline.c | 228 +++--- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 +- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 36 +- lib/power/power_acpi_cpufreq.c | 116 +-- lib/power/power_amd_pstate_cpufreq.c | 132 ++-- lib/power/power_common.c | 14 +- lib/power/power_common.h | 6 +- 
lib/power/power_cppc_cpufreq.c | 130 ++-- lib/power/power_intel_uncore.c | 72 +- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 156 ++-- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +- lib/power/rte_power_uncore.c | 14 +- lib/rawdev/rte_rawdev_pmd.h | 4 +- lib/rcu/rte_rcu_qsbr.c | 66 +- lib/rcu/rte_rcu_qsbr.h | 17 +- lib/regexdev/rte_regexdev.c | 88 +-- lib/regexdev/rte_regexdev.h | 13 +- lib/reorder/rte_reorder.c | 32 +- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 +++---- lib/stack/rte_stack.c | 8 +- lib/stack/stack_pvt.h | 4 +- lib/table/rte_table_acl.c | 72 +- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 +- lib/table/rte_table_hash_key32.c | 38 +- lib/table/rte_table_hash_key8.c | 38 +- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 +- lib/table/rte_table_lpm_ipv6.c | 44 +- lib/table/rte_table_stub.c | 4 +- lib/telemetry/telemetry.c | 39 +- lib/vhost/fd_man.c | 8 +- lib/vhost/iotlb.c | 36 +- lib/vhost/socket.c | 102 +-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 +-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 +-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_crypto.c | 12 +- lib/vhost/vhost_user.c | 530 ++++++------- lib/vhost/virtio_net.c | 188 ++--- lib/vhost/virtio_net_ctrl.c | 38 +- 209 files changed, 3976 insertions(+), 3963 deletions(-) create mode 100644 lib/member/member.h -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
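To make the cover letter a bit more concrete, here is a rough sketch of what such a per line helper can look like. It is illustrative only and not the code merged in lib/log/rte_log.h: the LOG_LINE and LOG_LINE_CHECK names are invented for this example, and the static_assert() on __builtin_strchr() is an assumption about how the gcc-only build error can be obtained (gcc constant-folds strchr() on string literals); RTE_LOG, RTE_FMT, RTE_FMT_HEAD and RTE_FMT_TAIL are existing DPDK macros.

#include <assert.h>
#include <rte_common.h>
#include <rte_log.h>

#ifdef __GNUC__
/* Turn a '\n' embedded in a literal format string into a build error. */
#define LOG_LINE_CHECK(fmt) \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain a \\n")
#else
#define LOG_LINE_CHECK(fmt)
#endif

/* The caller passes a format string without '\n'; the macro appends it. */
#define LOG_LINE(level, logtype, ...) do { \
	LOG_LINE_CHECK(RTE_FMT_HEAD(__VA_ARGS__,)); \
	RTE_LOG(level, logtype, \
		RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
			RTE_FMT_TAIL(__VA_ARGS__,))); \
} while (0)

/*
 * LOG_LINE(ERR, EAL, "port %u not found", port_id);   -> logs one line
 * LOG_LINE(ERR, EAL, "port %u not found\n", port_id); -> build error with gcc
 */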
* [PATCH v3 01/14] hash: remove some dead code 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 02/14] regexdev: fix logtype register David Marchand ` (12 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ray Kinsella, Dharmik Thakkar, Ruifeng Wang This macro is not used. Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/hash/rte_cuckoo_hash.h | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h index f7afc4dd79..8ea793c66e 100644 --- a/lib/hash/rte_cuckoo_hash.h +++ b/lib/hash/rte_cuckoo_hash.h @@ -29,17 +29,6 @@ #define RETURN_IF_TRUE(cond, retval) #endif -#if defined(RTE_LIBRTE_HASH_DEBUG) -#define ERR_IF_TRUE(cond, fmt, args...) do { \ - if (cond) { \ - RTE_LOG(ERR, HASH, fmt, ##args); \ - return; \ - } \ -} while (0) -#else -#define ERR_IF_TRUE(cond, fmt, args...) -#endif - #include <rte_hash_crc.h> #include <rte_jhash.h> -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 02/14] regexdev: fix logtype register 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand 2023-12-18 9:27 ` [PATCH v3 01/14] hash: remove some dead code David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 03/14] lib: use dedicated logtypes and macros David Marchand ` (11 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Ori Kam, Parav Pandit, Guy Kaneti This library logtype was not initialized so its logs would end up under the 0 logtype, iow, RTE_LOGTYPE_EAL. Fixes: b25246beaefc ("regexdev: add core functions") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Acked-by: Ori Kam <orika@nvidia.com> --- lib/regexdev/rte_regexdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index caec069182..d38a85eb0b 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -19,7 +19,7 @@ static struct { struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS]; } *rte_regexdev_shared_data; -int rte_regexdev_logtype; +RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO); static uint16_t regexdev_find_free_dev(void) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
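For anyone hitting the same class of bug in a driver or library, a minimal sketch of the registration pattern the fix relies on. The "mylib"/MYLIB names are hypothetical and only used for illustration; RTE_LOG_REGISTER_DEFAULT and RTE_LOG are the existing DPDK helpers. An unregistered logtype variable keeps its zero initial value, and logtype 0 is RTE_LOGTYPE_EAL, which is why the regexdev messages ended up attributed to EAL.

#include <rte_log.h>

/*
 * Broken pattern (what the fix removes): only declaring the variable,
 *
 *     int mylib_logtype;
 *
 * leaves it at 0, so every message is logged under RTE_LOGTYPE_EAL.
 *
 * Fixed pattern: register the logtype so it gets its own id and level.
 */
RTE_LOG_REGISTER_DEFAULT(mylib_logtype, INFO);
#define RTE_LOGTYPE_MYLIB mylib_logtype

#define MYLIB_LOG(level, fmt, args...) \
	RTE_LOG(level, MYLIB, "%s(): " fmt "\n", __func__, ## args)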
* [PATCH v3 03/14] lib: use dedicated logtypes and macros 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand 2023-12-18 9:27 ` [PATCH v3 01/14] hash: remove some dead code David Marchand 2023-12-18 9:27 ` [PATCH v3 02/14] regexdev: fix logtype register David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 04/14] lib: add newline in logs David Marchand ` (10 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Akhil Goyal, Fan Zhang, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V No printf! When a dedicated log helper exists, use it. And no usurpation please: a library should log under its logtype (see the eventdev rx adapter update for example). Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for consistency with the rest of eventdev (private) macros. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/cryptodev/rte_cryptodev.c | 2 +- lib/ethdev/ethdev_driver.c | 4 ++-- lib/ethdev/ethdev_private.c | 2 +- lib/ethdev/rte_class_eth.c | 2 +- lib/eventdev/rte_event_dma_adapter.c | 4 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 12 ++++++------ lib/eventdev/rte_eventdev.c | 6 +++--- lib/mempool/rte_mempool_ops.c | 2 +- 8 files changed, 17 insertions(+), 17 deletions(-) diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index 25e3ec12d1..ead8c9a623 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2684,7 +2684,7 @@ rte_cryptodev_driver_id_get(const char *name) int driver_id = -1; if (name == NULL) { - RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL"); + CDEV_LOG_DEBUG("name pointer NULL"); return -1; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index fff4b7b4cd..55a9dcc565 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_LOG(ERR, EAL, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", dargs); result = -1; goto parse_cleanup; @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index e98b7188b0..0e1c7b23c1 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); return str == NULL ? 
-1 : 0; } diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index b61dae849d..311beb17cb 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_LOG(ERR, EAL, "cannot parse argument list\n"); + RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); rte_errno = EINVAL; return NULL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index af4b5ad388..cbf9405438 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1046,7 +1046,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->vchanq == NULL) { - printf("Queue pair add not supported\n"); + RTE_EDEV_LOG_ERR("Queue pair add not supported"); return -ENOMEM; } } @@ -1057,7 +1057,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->tqmap == NULL) { - printf("tq pair add not supported\n"); + RTE_EDEV_LOG_ERR("tq pair add not supported"); return -ENOMEM; } } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 6db03adf04..82ae31712d 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -314,9 +314,9 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, } \ } while (0) -#define RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(port_id, retval) do { \ +#define RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_EDEV_LOG_ERR("Invalid port_id=%u", port_id); \ ret = retval; \ goto error; \ } \ @@ -3671,7 +3671,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3743,7 +3743,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3813,7 +3813,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3868,7 +3868,7 @@ handle_rxa_instance_get(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); diff --git 
a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 0ca32d6721..ae50821a3f 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1428,8 +1428,8 @@ rte_event_vector_pool_create(const char *name, unsigned int n, int ret; if (!nb_elem) { - RTE_LOG(ERR, EVENTDEV, - "Invalid number of elements=%d requested\n", nb_elem); + RTE_EDEV_LOG_ERR("Invalid number of elements=%d requested", + nb_elem); rte_errno = EINVAL; return NULL; } @@ -1444,7 +1444,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n"); + RTE_EDEV_LOG_ERR("error setting mempool handler"); goto err; } diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index ae1d288f27..e871de9ec9 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -46,7 +46,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", __func__, h->name); rte_errno = EEXIST; return -EEXIST; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 04/14] lib: add newline in logs 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (2 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 03/14] lib: use dedicated logtypes and macros David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 05/14] lib: remove redundant newline from logs David Marchand ` (9 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Harman Kalra, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala Fix places leading to a log message not terminated with a newline. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/eal/common/eal_common_options.c | 2 +- lib/eal/linux/eal_hugepage_info.c | 2 +- lib/eal/linux/eal_interrupts.c | 2 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/rte_ethdev.c | 40 ++++++++++++++--------------- lib/lpm/rte_lpm6.c | 6 ++--- lib/power/guest_channel.c | 2 +- lib/power/rte_power_pmd_mgmt.c | 6 ++--- 8 files changed, 31 insertions(+), 31 deletions(-) diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index a6d21f1cba..e9ba01fb89 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -2141,7 +2141,7 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); + RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); return -EPERM; } diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 581d9dfc91..36a495fb1f 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", + RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index d4919dff45..eabac24992 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized"); + RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); return -EINVAL; } } else { diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index 320e3e0093..ddb559aa95 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p", + RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", (void *)eth_dev, (void *)pci_dev); 
return; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 3858983fcc..b9d99ece15 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -724,7 +724,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name"); + RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); return -EINVAL; } @@ -2394,41 +2394,41 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu\n", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported"); + "Attempt to use locked device memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported"); + "Attempt to use DPDK memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue"); + "Attempt to use mutually exclusive memory settings for Rx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set"); + "Attempt to force Rx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0", + "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -2597,41 +2597,41 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported"); + "Attempt to use locked device memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - 
"Attempt to use DPDK memory for Tx queue, which is not supported"); + "Attempt to use DPDK memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue"); + "Attempt to use mutually exclusive memory settings for Tx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set"); + "Attempt to force Tx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0", + "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -6716,7 +6716,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); return -EINVAL; } @@ -6752,7 +6752,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); return -EINVAL; } @@ -6780,7 +6780,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration", + "Cannot set IP reassembly configuration\n", port_id); return -EINVAL; } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 873cc8bc26..24ce7dd022 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index cc05347425..a6f2097d5b 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -90,7 +90,7 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { RTE_LOG(ERR, 
GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s", fd_path); + "file %s\n", fd_path); goto error; } /* QEMU needs a delay after connection */ diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 38f8384085..6f18ed0adf 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); + RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); return -EINVAL; } pause_duration = duration; @@ -709,7 +709,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); + RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); + RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 05/14] lib: remove redundant newline from logs 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (3 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 04/14] lib: add newline in logs David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 06/14] eal/linux: remove log paraphrasing the doc David Marchand ` (8 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Chengwen Feng, Mattias Rönnblom, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia Fix places where two newline characters may be logged. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com> --- Changes since RFC v1: - split fixes on direct calls to printf or RTE_LOG in a previous patch, --- drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/bbdev/rte_bbdev.c | 6 +- lib/cfgfile/rte_cfgfile.c | 14 ++-- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 2 +- lib/dispatcher/rte_dispatcher.c | 12 +-- lib/dmadev/rte_dmadev.c | 2 +- lib/eal/windows/eal_memory.c | 2 +- lib/eventdev/eventdev_pmd.h | 6 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +-- lib/eventdev/rte_event_dma_adapter.c | 14 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 4 +- lib/eventdev/rte_eventdev.c | 4 +- lib/metrics/rte_metrics_telemetry.c | 2 +- lib/mldev/rte_mldev.c | 102 ++++++++++++------------ lib/net/rte_net_crc.c | 6 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/rcu/rte_rcu_qsbr.c | 4 +- lib/rcu/rte_rcu_qsbr.h | 8 +- lib/stack/rte_stack.c | 8 +- lib/vhost/vhost_crypto.c | 6 +- 27 files changed, 135 insertions(+), 135 deletions(-) diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c index 52d6d010c7..f21f9cc5a0 100644 --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c @@ -407,7 +407,7 @@ ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer) resp_param->result = ipsec_mb_qp_release(dev, qp_id); break; default: - CDEV_LOG_ERR("invalid mp request type\n"); + CDEV_LOG_ERR("invalid mp request type"); } out: diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index cfebea09c7..e09bb97abb 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -1106,12 +1106,12 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, intr_handle = dev->intr_handle; if (intr_handle == NULL) { - rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id); + rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id); return -ENOTSUP; } if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) { - rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n", + rte_bbdev_log(ERR, "Device %u queue_id 
%u is too big", dev_id, queue_id); return -ENOTSUP; } @@ -1120,7 +1120,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (ret && (ret != -EEXIST)) { rte_bbdev_log(ERR, - "dev %u q %u int ctl error op %d epfd %d vec %u\n", + "dev %u q %u int ctl error op %d epfd %d vec %u", dev_id, queue_id, op, epfd, vec); return ret; } diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index eefba6e408..2f9cc0722a 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -137,7 +137,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) unsigned int i; if (!params) { - CFG_LOG(ERR, "missing cfgfile parameters\n"); + CFG_LOG(ERR, "missing cfgfile parameters"); return -EINVAL; } @@ -150,7 +150,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) } if (valid_comment == 0) { - CFG_LOG(ERR, "invalid comment characters %c\n", + CFG_LOG(ERR, "invalid comment characters %c", params->comment_character); return -ENOTSUP; } @@ -188,7 +188,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, lineno++; if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) { CFG_LOG(ERR, " line %d - no \\n found on string. " - "Check if line too long\n", lineno); + "Check if line too long", lineno); goto error1; } /* skip parsing if comment character found */ @@ -209,7 +209,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, char *end = memchr(buffer, ']', len); if (end == NULL) { CFG_LOG(ERR, - "line %d - no terminating ']' character found\n", + "line %d - no terminating ']' character found", lineno); goto error1; } @@ -225,7 +225,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, split[1] = memchr(buffer, '=', len); if (split[1] == NULL) { CFG_LOG(ERR, - "line %d - no '=' character found\n", + "line %d - no '=' character found", lineno); goto error1; } @@ -249,7 +249,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, if (!(flags & CFG_FLAG_EMPTY_VALUES) && (*split[1] == '\0')) { CFG_LOG(ERR, - "line %d - cannot use empty values\n", + "line %d - cannot use empty values", lineno); goto error1; } @@ -414,7 +414,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname, return 0; } - CFG_LOG(ERR, "entry name doesn't exist\n"); + CFG_LOG(ERR, "entry name doesn't exist"); return -EINVAL; } diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c index 156bccd972..762b44f03e 100644 --- a/lib/compressdev/rte_compressdev_pmd.c +++ b/lib/compressdev/rte_compressdev_pmd.c @@ -100,12 +100,12 @@ rte_compressdev_pmd_create(const char *name, struct rte_compressdev *compressdev; if (params->name[0] != '\0') { - COMPRESSDEV_LOG(INFO, "User specified device name = %s\n", + COMPRESSDEV_LOG(INFO, "User specified device name = %s", params->name); name = params->name; } - COMPRESSDEV_LOG(INFO, "Creating compressdev %s\n", name); + COMPRESSDEV_LOG(INFO, "Creating compressdev %s", name); COMPRESSDEV_LOG(INFO, "Init parameters - name: %s, socket id: %d", name, params->socket_id); diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index ead8c9a623..b233c0ecd7 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2074,7 +2074,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id, } if (xforms == NULL) { - CDEV_LOG_ERR("Invalid xform\n"); + CDEV_LOG_ERR("Invalid xform"); rte_errno = EINVAL; return NULL; } diff --git 
a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c index 10d02edde9..95dd41b818 100644 --- a/lib/dispatcher/rte_dispatcher.c +++ b/lib/dispatcher/rte_dispatcher.c @@ -246,7 +246,7 @@ evd_service_register(struct rte_dispatcher *dispatcher) rc = rte_service_component_register(&service, &dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Registration of dispatcher service " - "%s failed with error code %d\n", + "%s failed with error code %d", service.name, rc); return rc; @@ -260,7 +260,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher) rc = rte_service_component_unregister(dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " - "failed with error code %d\n", rc); + "failed with error code %d", rc); return rc; } @@ -279,7 +279,7 @@ rte_dispatcher_create(uint8_t event_dev_id) RTE_CACHE_LINE_SIZE, socket_id); if (dispatcher == NULL) { - RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher"); rte_errno = ENOMEM; return NULL; } @@ -483,7 +483,7 @@ evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); if (unreg_handler == NULL) { - RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + RTE_EDEV_LOG_ERR("Invalid handler id %d", handler_id); return -EINVAL; } @@ -602,7 +602,7 @@ rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); if (unreg_finalizer == NULL) { - RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + RTE_EDEV_LOG_ERR("Invalid finalizer id %d", finalizer_id); return -EINVAL; } @@ -636,7 +636,7 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) */ if (rc != 0) RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " - "service component run state to %d\n", rc, + "service component run state to %d", rc, state); RTE_VERIFY(rc == 0); diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 4e5e420c82..009a21849a 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -726,7 +726,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status * return -EINVAL; if (vchan >= dev->data->dev_conf.nb_vchans) { - RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan); + RTE_DMA_LOG(ERR, "Device %u vchan %u out of range", dev_id, vchan); return -EINVAL; } diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index 31410a41fd..fd39155163 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -110,7 +110,7 @@ eal_mem_win32api_init(void) VirtualAlloc2_ptr = (VirtualAlloc2_type)( (void *)GetProcAddress(library, function)); if (VirtualAlloc2_ptr == NULL) { - RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")\n", + RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")", library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 30bd90085c..2ec5aec0a8 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -49,14 +49,14 @@ extern "C" { /* Macros to check for valid device */ #define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return retval; \ } \ } while (0) #define RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, errno, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ rte_errno = errno; \ return retval; \ } \ @@ -64,7 +64,7 @@ extern "C" { #define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return; \ } \ } while (0) diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 1b435c9f0e..d46595d190 100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -133,7 +133,7 @@ static struct event_crypto_adapter **event_crypto_adapter; /* Macros to check for valid adapter */ #define EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!eca_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -309,7 +309,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) return -EIO; @@ -319,7 +319,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -391,7 +391,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, sizeof(struct crypto_device_info), 0, socket_id); if (adapter->cdevs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices"); eca_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -1403,7 +1403,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1436,7 +1436,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index cbf9405438..4196164305 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -20,7 +20,7 @@ #define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ do { \ if (!edma_adapter_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id 
= %d", id); \ return retval; \ } \ } while (0) @@ -313,7 +313,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_dev_configure(evdev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); if (started) { if (rte_event_dev_start(evdev_id)) return -EIO; @@ -323,7 +323,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_port_setup(evdev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); return ret; } @@ -407,7 +407,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, num_dma_dev * sizeof(struct dma_device_info), 0, socket_id); if (adapter->dma_devs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -417,7 +417,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, for (i = 0; i < num_dma_dev; i++) { ret = rte_dma_info_get(i, &info); if (ret) { - RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + RTE_EDEV_LOG_ERR("Failed to get dma device info"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return ret; @@ -1297,7 +1297,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1326,7 +1326,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 82ae31712d..1b83a55b5c 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -293,14 +293,14 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ return retval; \ } \ } while (0) #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_GOTO_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ ret = retval; \ goto error; \ } \ @@ -308,7 +308,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, retval) do { \ if ((token) == NULL || strlen(token) == 0 || !isdigit(*token)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token\n"); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token"); \ ret = retval; \ goto error; \ } \ @@ -1540,7 +1540,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) @@ -1551,7 +1551,7 @@ rxa_default_conf_cb(uint8_t 
id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -1628,7 +1628,7 @@ rxa_create_intr_thread(struct event_eth_rx_adapter *rx_adapter) if (!err) return 0; - RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); error: rte_ring_free(rx_adapter->intr_ring); @@ -1644,12 +1644,12 @@ rxa_destroy_intr_thread(struct event_eth_rx_adapter *rx_adapter) err = pthread_cancel((pthread_t)rx_adapter->rx_intr_thread.opaque_id); if (err) - RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d\n", + RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d", err); err = rte_thread_join(rx_adapter->rx_intr_thread, NULL); if (err) - RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); rte_ring_free(rx_adapter->intr_ring); @@ -1915,7 +1915,7 @@ rxa_init_service(struct event_eth_rx_adapter *rx_adapter, uint8_t id) if (rte_mbuf_dyn_rx_timestamp_register( &event_eth_rx_timestamp_dynfield_offset, &event_eth_rx_timestamp_dynflag) != 0) { - RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf\n"); + RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf"); return -rte_errno; } @@ -2445,7 +2445,7 @@ rxa_create(uint8_t id, uint8_t dev_id, RTE_DIM(default_rss_key)); if (rx_adapter->eth_devices == NULL) { - RTE_EDEV_LOG_ERR("failed to get mem for eth devices\n"); + RTE_EDEV_LOG_ERR("failed to get mem for eth devices"); rte_free(rx_adapter); return -ENOMEM; } @@ -2497,12 +2497,12 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params, return 0; } else if (!rxa_params->use_queue_event_buf && rxa_params->event_buf_size == 0) { - RTE_EDEV_LOG_ERR("event buffer size can't be zero\n"); + RTE_EDEV_LOG_ERR("event buffer size can't be zero"); return -EINVAL; } else if (rxa_params->use_queue_event_buf && rxa_params->event_buf_size != 0) { RTE_EDEV_LOG_ERR("event buffer size needs to be configured " - "as part of queue add\n"); + "as part of queue add"); return -EINVAL; } @@ -3597,7 +3597,7 @@ handle_rxa_stats(const char *cmd __rte_unused, /* Get Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id, &rx_adptr_stats)) { - RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats"); return -1; } @@ -3636,7 +3636,7 @@ handle_rxa_stats_reset(const char *cmd __rte_unused, /* Reset Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) { - RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats"); return -1; } diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c index 360d5caf6a..56435be991 100644 --- a/lib/eventdev/rte_event_eth_tx_adapter.c +++ b/lib/eventdev/rte_event_eth_tx_adapter.c @@ -334,7 +334,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, pc); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); if (started) { if (rte_event_dev_start(dev_id)) diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 27466707bc..3f22e85173 100644 --- 
a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -106,7 +106,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to configure event dev %u\n", dev_id); + EVTIM_LOG_ERR("failed to configure event dev %u", dev_id); if (started) if (rte_event_dev_start(dev_id)) return -EIO; @@ -116,7 +116,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to setup event port %u on event dev %u\n", + EVTIM_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id); return ret; } diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index ae50821a3f..157752868d 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1007,13 +1007,13 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t } if (*dev->dev_ops->port_link == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } if (profile_id && *dev->dev_ops->port_link_profile == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 5be21b2e86..1d133e1f8c 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -363,7 +363,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names, } } if (j == num_metrics) { - METRICS_LOG_WARN("Invalid stat name %s\n", + METRICS_LOG_WARN("Invalid stat name %s", stat_names[i]); free(names); return -EINVAL; diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c index cc5f2e0cc6..196b1850e6 100644 --- a/lib/mldev/rte_mldev.c +++ b/lib/mldev/rte_mldev.c @@ -159,7 +159,7 @@ int rte_ml_dev_init(size_t dev_max) { if (dev_max == 0 || dev_max > INT16_MAX) { - RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)\n", dev_max, INT16_MAX); + RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)", dev_max, INT16_MAX); rte_errno = EINVAL; return -rte_errno; } @@ -217,7 +217,7 @@ rte_ml_dev_socket_id(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -232,7 +232,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -241,7 +241,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) return -ENOTSUP; if (dev_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL", dev_id); return -EINVAL; } memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); @@ -257,7 +257,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -271,7 +271,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) } if (config == NULL) 
{ - RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL", dev_id); return -EINVAL; } @@ -280,7 +280,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) return ret; if (config->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u\n", dev_id, + RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u", dev_id, config->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } @@ -294,7 +294,7 @@ rte_ml_dev_close(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -318,7 +318,7 @@ rte_ml_dev_start(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -345,7 +345,7 @@ rte_ml_dev_stop(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -372,7 +372,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -386,7 +386,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, } if (qp_conf == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL", dev_id); return -EINVAL; } @@ -404,7 +404,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -413,7 +413,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) return -ENOTSUP; if (stats == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL", dev_id); return -EINVAL; } memset(stats, 0, sizeof(struct rte_ml_dev_stats)); @@ -427,7 +427,7 @@ rte_ml_dev_stats_reset(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return; } @@ -445,7 +445,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -462,7 +462,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -471,12 +471,12 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i return -ENOTSUP; if (name == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL\n", dev_id); + 
RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL", dev_id); return -EINVAL; } @@ -490,7 +490,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -499,12 +499,12 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t return -ENOTSUP; if (stat_ids == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL", dev_id); return -EINVAL; } @@ -518,7 +518,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_ struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -535,7 +535,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -544,7 +544,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) return -ENOTSUP; if (fd == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL", dev_id); return -EINVAL; } @@ -557,7 +557,7 @@ rte_ml_dev_selftest(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -574,7 +574,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -583,12 +583,12 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * return -ENOTSUP; if (params == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL", dev_id); return -EINVAL; } if (model_id == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL", dev_id); return -EINVAL; } @@ -601,7 +601,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -618,7 +618,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -635,7 +635,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -652,7 +652,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf struct rte_ml_dev *dev; if 
(!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -661,7 +661,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf return -ENOTSUP; if (model_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL\n", dev_id, + RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL", dev_id, model_id); return -EINVAL; } @@ -675,7 +675,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -684,7 +684,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) return -ENOTSUP; if (buffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL", dev_id); return -EINVAL; } @@ -698,7 +698,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -707,12 +707,12 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d return -ENOTSUP; if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -726,7 +726,7 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -735,12 +735,12 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * return -ENOTSUP; if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -811,7 +811,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -823,13 +823,13 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -847,7 +847,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", 
dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -859,13 +859,13 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -883,7 +883,7 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -892,12 +892,12 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error return -ENOTSUP; if (op == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL", dev_id); return -EINVAL; } if (error == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL", dev_id); return -EINVAL; } #else diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index a685f9e7bb..900d6de7f4 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -179,7 +179,7 @@ avx512_vpclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_512) return handlers_avx512; #endif - NET_LOG(INFO, "Requirements not met, can't use AVX512\n"); + NET_LOG(INFO, "Requirements not met, can't use AVX512"); return NULL; } @@ -205,7 +205,7 @@ sse42_pclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_sse42; #endif - NET_LOG(INFO, "Requirements not met, can't use SSE\n"); + NET_LOG(INFO, "Requirements not met, can't use SSE"); return NULL; } @@ -231,7 +231,7 @@ neon_pmull_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_neon; #endif - NET_LOG(INFO, "Requirements not met, can't use NEON\n"); + NET_LOG(INFO, "Requirements not met, can't use NEON"); return NULL; } diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index 3e8fac1df4..475eff6abe 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -160,13 +160,13 @@ ethdev_ptype_setup(uint16_t port, uint16_t queue) if (!l3_ipv4 || !l3_ipv6) { node_info("ethdev_rx", - "Enabling ptype callback for required ptypes on port %u\n", + "Enabling ptype callback for required ptypes on port %u", port); if (!rte_eth_add_rx_callback(port, queue, eth_pkt_parse_cb, NULL)) { node_err("ethdev_rx", - "Failed to add rx ptype cb: port=%d, queue=%d\n", + "Failed to add rx ptype cb: port=%d, queue=%d", port, queue); return -EINVAL; } diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c index 0dbfde64fe..18955971f6 100644 --- a/lib/node/ip4_lookup.c +++ b/lib/node/ip4_lookup.c @@ -143,7 +143,7 @@ rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop, ip, depth, val); if (ret < 0) { node_err("ip4_lookup", - "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d\n", + "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c index 6f56eb5ec5..309964f60f 100644 --- a/lib/node/ip6_lookup.c +++ b/lib/node/ip6_lookup.c @@ -283,7 +283,7 @@ rte_node_ip6_route_add(const uint8_t *ip, uint8_t depth, 
uint16_t next_hop, if (ret < 0) { node_err("ip6_lookup", "Unable to add entry %s / %d nh (%x) to LPM " - "table on sock %d, rc=%d\n", + "table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/kernel_rx.c b/lib/node/kernel_rx.c index 2dba7c8cc7..6c20cdbb1e 100644 --- a/lib/node/kernel_rx.c +++ b/lib/node/kernel_rx.c @@ -134,7 +134,7 @@ kernel_rx_node_do(struct rte_graph *graph, struct rte_node *node, kernel_rx_node if (len == 0 || len == 0xFFFF) { rte_pktmbuf_free(m); if (rx->idx <= 0) - node_dbg("kernel_rx", "rx_mbuf array is empty\n"); + node_dbg("kernel_rx", "rx_mbuf array is empty"); rx->idx--; break; } @@ -207,20 +207,20 @@ kernel_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); if (ctx->pktmbuf_pool == NULL) { - node_err("kernel_rx", "Invalid mbuf pool on graph %s\n", graph->name); + node_err("kernel_rx", "Invalid mbuf pool on graph %s", graph->name); return -EINVAL; } recv_info = rte_zmalloc_socket("kernel_rx_info", sizeof(kernel_rx_info_t), RTE_CACHE_LINE_SIZE, graph->socket); if (!recv_info) { - node_err("kernel_rx", "Kernel recv_info is NULL\n"); + node_err("kernel_rx", "Kernel recv_info is NULL"); return -ENOMEM; } sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (sock < 0) { - node_err("kernel_rx", "Unable to open RAW socket\n"); + node_err("kernel_rx", "Unable to open RAW socket"); return sock; } diff --git a/lib/node/kernel_tx.c b/lib/node/kernel_tx.c index 27d1808c71..3a96741622 100644 --- a/lib/node/kernel_tx.c +++ b/lib/node/kernel_tx.c @@ -36,7 +36,7 @@ kernel_tx_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t sin.sin_addr.s_addr = ip4->dst_addr; if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) - node_err("kernel_tx", "Unable to send packets: %s\n", strerror(errno)); + node_err("kernel_tx", "Unable to send packets: %s", strerror(errno)); } } @@ -87,7 +87,7 @@ kernel_tx_node_init(const struct rte_graph *graph __rte_unused, struct rte_node ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (ctx->sock < 0) - node_err("kernel_tx", "Unable to open RAW socket\n"); + node_err("kernel_tx", "Unable to open RAW socket"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index a9f3d6cc98..41a44be4b9 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -92,7 +92,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; @@ -144,7 +144,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 5979fb0efb..6b908e7ee0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -299,7 +299,7 @@ rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Copy the current value of token. 
@@ -350,7 +350,7 @@ rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id) { RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* The reader can go offline only after the load of the @@ -427,7 +427,7 @@ rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v, 1, rte_memory_order_release); __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING, - "Lock counter %u. Nested locks?\n", + "Lock counter %u. Nested locks?", v->qsbr_cnt[thread_id].lock_cnt); #endif } @@ -481,7 +481,7 @@ rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Acquire the changes to the shared data structure released diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c index 1fabec2bfe..1dab6d6645 100644 --- a/lib/stack/rte_stack.c +++ b/lib/stack/rte_stack.c @@ -56,7 +56,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, int ret; if (flags & ~(RTE_STACK_F_LF)) { - STACK_LOG_ERR("Unsupported stack flags %#x\n", flags); + STACK_LOG_ERR("Unsupported stack flags %#x", flags); return NULL; } @@ -65,7 +65,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, #endif #if !defined(RTE_STACK_LF_SUPPORTED) if (flags & RTE_STACK_F_LF) { - STACK_LOG_ERR("Lock-free stack is not supported on your platform\n"); + STACK_LOG_ERR("Lock-free stack is not supported on your platform"); rte_errno = ENOTSUP; return NULL; } @@ -82,7 +82,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - STACK_LOG_ERR("Cannot reserve memory for tailq\n"); + STACK_LOG_ERR("Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -92,7 +92,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id, 0, __alignof__(*s)); if (mz == NULL) { - STACK_LOG_ERR("Cannot reserve stack memzone!\n"); + STACK_LOG_ERR("Cannot reserve stack memzone!"); rte_mcfg_tailq_write_unlock(); rte_free(te); return NULL; diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 3e1ef1ac25..6e5443e5f8 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -245,7 +245,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -301,7 +301,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -321,7 +321,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->auth_key_len > VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH) { - VC_LOG_DBG("Invalid auth key length\n"); + VC_LOG_DBG("Invalid auth key length"); return -VIRTIO_CRYPTO_BADMSG; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
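The removals in this patch rely on the per-component logging helpers already appending the trailing newline themselves. A minimal sketch of that pattern, assuming a wrapper shaped like the eventdev one (an illustration, not the exact RTE_EDEV_LOG_ERR definition):

    /* Typical per-library wrapper: the trailing "\n" is appended here,
     * so the caller's format string must not contain one.
     */
    #define RTE_EDEV_LOG_ERR(...) \
        RTE_LOG(ERR, EVENTDEV, \
            RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \
                RTE_FMT_TAIL(__VA_ARGS__,)))

    /* Before this patch: the message ends with "\n\n", leaving a blank
     * line in the log output.
     */
    RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id);

    /* After this patch: exactly one newline per message. */
    RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id);
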
* [PATCH v3 06/14] eal/linux: remove log paraphrasing the doc 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (4 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 05/14] lib: remove redundant newline from logs David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 07/14] bpf: remove log level in internal helper David Marchand ` (7 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff An error log message does not need to paraphrase the DPDK documentation. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/eal/linux/eal_timer.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index 3a30284e3a..df9ad61ae9 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -152,11 +152,7 @@ rte_eal_hpet_init(int make_default) } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n" - "Please enable CONFIG_HPET_MMAP in your kernel configuration " - "to allow HPET support.\n" - "To run without using HPET, unset RTE_LIBEAL_USE_HPET " - "in your build configuration or use '--no-hpet' EAL flag.\n"); + RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); close(fd); internal_conf->no_hpet = 1; return -1; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 07/14] bpf: remove log level in internal helper 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (5 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 06/14] eal/linux: remove log paraphrasing the doc David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 08/14] lib: simplify multilines log messages David Marchand ` (6 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Konstantin Ananyev There is no other log level than debug, simplify this helper. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/bpf/bpf_validate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 95b9ef99ef..f246b3c5eb 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -2178,18 +2178,18 @@ restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) } static void -log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, - uint32_t pc, int32_t loglvl) +log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, + uint32_t pc) { const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - rte_log(loglvl, rte_bpf_logtype, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - rte_log(loglvl, rte_bpf_logtype, + RTE_BPF_LOG(DEBUG, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2269,7 +2269,7 @@ evaluate(struct bpf_verifier *bvf) } } - log_eval_state(bvf, ins + idx, idx, RTE_LOG_DEBUG); + log_dbg_eval_state(bvf, ins + idx, idx); bvf->evin = NULL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 08/14] lib: simplify multilines log messages 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (6 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 07/14] bpf: remove log level in internal helper David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 10:05 ` Andrew Rybchenko 2023-12-18 9:27 ` [PATCH v3 09/14] rcu: introduce a logging helper David Marchand ` (5 subsequent siblings) 13 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Andrew Rybchenko Those error log messages don't need to span on multiple lines. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- Changes since RFC v2: - fixed format string crossing line boundaries, --- lib/acl/tb_mem.c | 4 ++-- lib/bpf/bpf_stub.c | 6 ++---- lib/eal/windows/eal_hugepages.c | 4 ++-- lib/ethdev/rte_ethdev.c | 12 ++++-------- 4 files changed, 10 insertions(+), 16 deletions(-) diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 6a9d96aaed..4ee65b23da 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,8 +26,8 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu)\n failed, currently allocated " - "by pool: %zu bytes\n", __func__, sz, pool->alloc); + RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; } diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index ebc5343896..83c2203622 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,8 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libelf installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", __func__); rte_errno = ENOTSUP; return NULL; @@ -36,8 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libpcap installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index b007dceb39..775c67e4c4 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -105,8 +105,8 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege\n" - "Verify that large-page support privilege is assigned to the current user\n"); + RTE_LOG(ERR, EAL, + "Cannot claim hugepage privilege, check large-page support privilege\n"); return -1; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b9d99ece15..9dd0efa9d8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6709,8 +6709,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly capability\n", + "port_id=%u is not configured, cannot get IP reassembly capability\n", port_id); return -EINVAL; } @@ -6745,8 
+6744,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly configuration\n", + "port_id=%u is not configured, cannot get IP reassembly configuration\n", port_id); return -EINVAL; } @@ -6779,16 +6777,14 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration\n", + "port_id=%u is not configured, cannot set IP reassembly configuration\n", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u started,\n" - "cannot configure IP reassembly params.\n", + "port_id=%u is started, cannot configure IP reassembly params.\n", port_id); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v3 08/14] lib: simplify multilines log messages 2023-12-18 9:27 ` [PATCH v3 08/14] lib: simplify multilines log messages David Marchand @ 2023-12-18 10:05 ` Andrew Rybchenko 0 siblings, 0 replies; 122+ messages in thread From: Andrew Rybchenko @ 2023-12-18 10:05 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam On 12/18/23 12:27, David Marchand wrote: > Those error log messages don't need to span on multiple lines. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 09/14] rcu: introduce a logging helper 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (7 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 08/14] lib: simplify multilines log messages David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 10/14] vhost: improve log for memory dumping configuration David Marchand ` (4 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli, Tyler Retzlaff Add a simple helper for logging messages in this library. Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- lib/rcu/rte_rcu_qsbr.h | 1 + 2 files changed, 24 insertions(+), 39 deletions(-) diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 41a44be4b9..5b6530788a 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -19,6 +19,9 @@ #include "rte_rcu_qsbr.h" #include "rcu_qsbr_pvt.h" +#define RCU_LOG(level, fmt, args...) \ + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + /* Get the memory size of QSBR variable */ size_t rte_rcu_qsbr_get_memsize(uint32_t max_threads) @@ -26,9 +29,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads) size_t sz; if (max_threads == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid max_threads %u\n", - __func__, max_threads); + RCU_LOG(ERR, "Invalid max_threads %u", max_threads); rte_errno = EINVAL; return 1; @@ -52,8 +53,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) size_t sz; if (v == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -85,8 +85,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -137,8 +136,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -211,8 +209,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v) uint32_t i, t, id; if (v == NULL || f == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -282,8 +279,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) params->v == NULL || params->name == NULL || params->size == 0 || params->esize == 0 || (params->esize % 4 != 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return NULL; @@ -293,9 +289,10 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) */ if ((params->trigger_reclaim_limit <= params->size) && 
(params->max_reclaim_size == 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter, size = %u, trigger_reclaim_limit = %u, max_reclaim_size = %u\n", - __func__, params->size, params->trigger_reclaim_limit, + RCU_LOG(ERR, + "Invalid input parameter, size = %u, trigger_reclaim_limit = %u, " + "max_reclaim_size = %u", + params->size, params->trigger_reclaim_limit, params->max_reclaim_size); rte_errno = EINVAL; @@ -328,8 +325,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) __RTE_QSBR_TOKEN_SIZE + params->esize, qs_fifo_size, SOCKET_ID_ANY, flags); if (dq->r == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): defer queue create failed\n", __func__); + RCU_LOG(ERR, "defer queue create failed"); rte_free(dq); return NULL; } @@ -354,8 +350,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) uint32_t cur_size; if (dq == NULL || e == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -372,8 +367,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) */ cur_size = rte_ring_count(dq->r); if (cur_size > dq->trigger_reclaim_limit) { - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Triggering reclamation\n", __func__); + RCU_LOG(INFO, "Triggering reclamation"); rte_rcu_qsbr_dq_reclaim(dq, dq->max_reclaim_size, NULL, NULL, NULL); } @@ -391,23 +385,18 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) * Enqueue uses the configured flags when the DQ was created. */ if (rte_ring_enqueue_elem(dq->r, data, dq->esize) != 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Enqueue failed\n", __func__); + RCU_LOG(ERR, "Enqueue failed"); /* Note that the token generated above is not used. * Other than wasting tokens, it should not cause any * other issues. 
*/ - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Skipped enqueuing token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Skipped enqueuing token = %" PRIu64, dq_elem->token); rte_errno = ENOSPC; return 1; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Enqueued token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Enqueued token = %" PRIu64, dq_elem->token); return 0; } @@ -422,8 +411,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, __rte_rcu_qsbr_dq_elem_t *dq_elem; if (dq == NULL || n == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -445,17 +433,14 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, } rte_ring_dequeue_elem_finish(dq->r, 1); - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Reclaimed token = %" PRIu64, dq_elem->token); dq->free_fn(dq->p, dq_elem->elem, 1); cnt++; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed %u resources\n", __func__, cnt); + RCU_LOG(INFO, "Reclaimed %u resources", cnt); if (freed != NULL) *freed = cnt; @@ -472,8 +457,7 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) unsigned int pending; if (dq == NULL) { - rte_log(RTE_LOG_DEBUG, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(DEBUG, "Invalid input parameter"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 6b908e7ee0..0dca8310c0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -36,6 +36,7 @@ extern "C" { #include <rte_ring.h> extern int rte_rcu_log_type; +#define RTE_LOGTYPE_RCU rte_rcu_log_type #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
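Usage note on the helper above: a converted call site such as the one in rte_rcu_qsbr_get_memsize() now reads

    RCU_LOG(ERR, "Invalid max_threads %u", max_threads);

which, per the RCU_LOG definition in this patch, expands to the equivalent of

    RTE_LOG(ERR, RCU, "%s(): Invalid max_threads %u\n", __func__, max_threads);

so the function-name prefix and the trailing newline are supplied in one place instead of being repeated at every call site.
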
* [PATCH v3 10/14] vhost: improve log for memory dumping configuration 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (8 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 09/14] rcu: introduce a logging helper David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 11/14] log: add a per line log helper David Marchand ` (3 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Maxime Coquelin, Chenbo Xia Add the device name as a prefix of logs associated to madvise() calls. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- lib/vhost/iotlb.c | 18 +++++++++--------- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 26 +++++++++++++------------- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 87ac0e5126..10ab77262e 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -54,16 +54,16 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct vhost_iotlb_entr } static void -vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node) +vhost_user_iotlb_set_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node) { uint64_t start; start = node->uaddr + node->uoffset; - mem_set_dump((void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); + mem_set_dump(dev, (void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); } static void -vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, +vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node, struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next) { uint64_t start, end; @@ -80,7 +80,7 @@ vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift)); if (end > start) - mem_set_dump((void *)(uintptr_t)start, end - start, false, + mem_set_dump(dev, (void *)(uintptr_t)start, end - start, false, RTE_BIT64(node->page_shift)); } @@ -204,7 +204,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net *dev) vhost_user_iotlb_wr_lock_all(dev); RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) { - vhost_user_iotlb_clear_dump(node, NULL, NULL); + vhost_user_iotlb_clear_dump(dev, node, NULL, NULL); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -230,7 +230,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net *dev) if (!entry_idx) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -285,7 +285,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua vhost_user_iotlb_pool_put(dev, new_node); goto unlock; } else if (node->iova > new_node->iova) { - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_BEFORE(node, new_node, next); dev->iotlb_cache_nr++; @@ -293,7 +293,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua } } - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_TAIL(&dev->iotlb_list, new_node, next); dev->iotlb_cache_nr++; @@ -322,7 +322,7 @@ 
vhost_user_iotlb_cache_remove(struct virtio_net *dev, uint64_t iova, uint64_t si if (iova < node->iova + node->size) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index f8624fba3d..5f24911190 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -1062,6 +1062,6 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } -void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment); +void mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t alignment); #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index e36312181a..413f068bcd 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -763,7 +763,7 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr) } void -mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) +mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t pagesz) { #ifdef MADV_DONTDUMP void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz); @@ -771,8 +771,8 @@ mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - rte_log(RTE_LOG_INFO, vhost_config_log_level, - "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno)); + VHOST_LOG_CONFIG(dev->ifname, INFO, + "could not set coredump preference (%s).\n", strerror(errno)); } #endif } @@ -807,7 +807,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc_packed, len, true, + mem_set_dump(dev, vq->desc_packed, len, true, hua_to_alignment(dev->mem, vq->desc_packed)); numa_realloc(&dev, &vq); *pdev = dev; @@ -824,7 +824,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->driver_event, len, true, + mem_set_dump(dev, vq->driver_event, len, true, hua_to_alignment(dev->mem, vq->driver_event)); len = sizeof(struct vring_packed_desc_event); vq->device_event = (struct vring_packed_desc_event *) @@ -837,7 +837,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->device_event, len, true, + mem_set_dump(dev, vq->device_event, len, true, hua_to_alignment(dev->mem, vq->device_event)); vq->access_ok = true; return; @@ -855,7 +855,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); + mem_set_dump(dev, vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); numa_realloc(&dev, &vq); *pdev = dev; *pvq = vq; @@ -871,7 +871,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); + mem_set_dump(dev, vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); len = sizeof(struct vring_used) + sizeof(struct vring_used_elem) * vq->size; if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) @@ -884,7 +884,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); + 
mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { VHOST_LOG_CONFIG(dev->ifname, WARNING, @@ -1274,7 +1274,7 @@ vhost_user_mmap_region(struct virtio_net *dev, region->mmap_addr = mmap_addr; region->mmap_size = mmap_size; region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset; - mem_set_dump(mmap_addr, mmap_size, false, alignment); + mem_set_dump(dev, mmap_addr, mmap_size, false, alignment); if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { @@ -1580,7 +1580,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } alignment = get_blk_size(mfd); - mem_set_dump(ptr, size, false, alignment); + mem_set_dump(dev, ptr, size, false, alignment); *fd = mfd; return ptr; } @@ -1789,7 +1789,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info->fd = -1; } - mem_set_dump(addr, mmap_size, false, get_blk_size(fd)); + mem_set_dump(dev, addr, mmap_size, false, get_blk_size(fd)); dev->inflight_info->fd = fd; dev->inflight_info->addr = addr; dev->inflight_info->size = mmap_size; @@ -2343,7 +2343,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, dev->log_addr = (uint64_t)(uintptr_t)addr; dev->log_base = dev->log_addr + off; dev->log_size = size; - mem_set_dump(addr, size + off, false, alignment); + mem_set_dump(dev, addr, size + off, false, alignment); for (i = 0; i < dev->nr_vring; i++) { struct vhost_virtqueue *vq = dev->virtqueue[i]; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v3 11/14] log: add a per line log helper 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (9 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 10/14] vhost: improve log for memory dumping configuration David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 12/14] lib: convert to per line logging David Marchand ` (2 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng gcc builtin __builtin_strchr can be used as a static assertion to check whether passed format strings contain a \n. This can be useful to detect double \n in log messages. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Chengwen Feng <fengchengwen@huawei.com> --- Changes since RFC v1: - added a check in checkpatches.sh, --- devtools/checkpatches.sh | 8 ++++++++ lib/log/rte_log.h | 21 +++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh index 10b79ca2bc..10d1bf490b 100755 --- a/devtools/checkpatches.sh +++ b/devtools/checkpatches.sh @@ -53,6 +53,14 @@ print_usage () { check_forbidden_additions() { # <patch> res=0 + # refrain from new calls to RTE_LOG + awk -v FOLDERS="lib" \ + -v EXPRESSIONS="RTE_LOG\\\(" \ + -v RET_ON_FAIL=1 \ + -v MESSAGE='Prefer RTE_LOG_LINE' \ + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \ + "$1" || res=1 + # refrain from new additions of rte_panic() and rte_exit() # multiple folders and expressions are separated by spaces awk -v FOLDERS="lib drivers" \ diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h index 3394746103..7dae74d849 100644 --- a/lib/log/rte_log.h +++ b/lib/log/rte_log.h @@ -17,6 +17,7 @@ extern "C" { #endif +#include <assert.h> #include <stdint.h> #include <stdio.h> #include <stdarg.h> @@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap) RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \ 0) +#ifdef RTE_TOOLCHAIN_GCC +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ + static_assert(!__builtin_strchr(fmt, '\n'), \ + "This log format string contains a \\n") +#else +#define RTE_LOG_CHECK_NO_NEWLINE(...) +#endif + +#define RTE_LOG_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + +#define RTE_LOG_DP_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__,)); \ + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + #define RTE_LOG_REGISTER_IMPL(type, name, level) \ int type; \ RTE_INIT(__##type) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
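Taken together with the conversion done in the next patch, the effect of RTE_LOG_LINE() on a call site looks roughly like the sketch below (the call and format string are taken from the EAL conversion later in this series; the diagnostic wording is indicative and depends on the gcc version):

    /* Before: the caller supplies the trailing newline. */
    RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname);

    /* After: RTE_LOG_LINE() appends the newline itself. */
    RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname);

    /* Keeping (or duplicating) a '\n' in the format string now fails the
     * build with gcc, roughly as:
     *   error: static assertion failed: "This log format string contains a \n"
     */
    RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)\n", busname);

With other toolchains RTE_LOG_CHECK_NO_NEWLINE() expands to nothing, so the compile-time check is gcc-only, but the newline is appended unconditionally by the helper either way.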
* [PATCH v3 12/14] lib: convert to per line logging 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (10 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 11/14] log: add a per line log helper David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 13/14] lib: replace logging helpers David Marchand 2023-12-18 9:27 ` [PATCH v3 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. Note: - for acl and sched libraries that still has some debug multilines messages, a direct call to RTE_LOG is used: this will make it easier to notice such special cases, Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/acl/acl_bld.c | 28 +-- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 2 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +++---- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +-- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 ++-- lib/eal/common/eal_common_lcore.c | 26 +-- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +++--- lib/eal/common/eal_common_memzone.c | 24 +-- lib/eal/common/eal_common_options.c | 236 ++++++++++---------- lib/eal/common/eal_common_proc.c | 112 +++++----- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 ++--- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 ++-- lib/eal/common/malloc_mp.c | 72 +++---- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 74 +++---- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +++--- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +-- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +++++------ lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 ++-- lib/eal/linux/eal_hugepage_info.c | 38 ++-- lib/eal/linux/eal_interrupts.c | 116 +++++----- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +++++------ lib/eal/linux/eal_memory.c | 208 +++++++++--------- lib/eal/linux/eal_thread.c | 4 +- lib/eal/linux/eal_timer.c | 10 +- lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ 
lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +-- lib/eal/windows/eal.c | 36 ++-- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 6 +- lib/eal/windows/eal_memalloc.c | 50 ++--- lib/eal/windows/eal_memory.c | 20 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 4 +- lib/eal/windows/rte_thread.c | 28 +-- lib/efd/rte_efd.c | 58 ++--- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/hash/rte_cuckoo_hash.c | 52 ++--- lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 ++-- lib/log/log.c | 6 +- lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/mempool/rte_mempool.c | 24 +-- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/pipeline/rte_pipeline.c | 228 ++++++++++---------- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +-- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 ++--- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 38 ++-- lib/power/power_acpi_cpufreq.c | 106 ++++----- lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ lib/power/power_common.c | 10 +- lib/power/power_cppc_cpufreq.c | 118 +++++----- lib/power/power_intel_uncore.c | 68 +++--- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 144 ++++++------- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +-- lib/power/rte_power_uncore.c | 14 +- lib/rcu/rte_rcu_qsbr.c | 2 +- lib/reorder/rte_reorder.c | 32 +-- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +-- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 ++++++++++++------------ lib/table/rte_table_acl.c | 72 +++---- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 ++-- lib/table/rte_table_hash_key32.c | 38 ++-- lib/table/rte_table_hash_key8.c | 38 ++-- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 ++-- lib/table/rte_table_lpm_ipv6.c | 44 ++-- lib/table/rte_table_stub.c | 4 +- lib/vhost/fd_man.c | 8 +- 129 files changed, 2277 insertions(+), 2278 deletions(-) diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index eaf8770415..27bdd6b9a1 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -1017,8 +1017,8 @@ build_trie(struct acl_build_context *context, struct rte_acl_build_rule *head, break; default: - RTE_LOG(ERR, ACL, - "Error in rule[%u] type - %hhu\n", + RTE_LOG_LINE(ERR, ACL, + "Error in rule[%u] type - %hhu", rule->f->data.userdata, rule->config->defs[n].type); return NULL; @@ -1374,7 +1374,7 @@ acl_build_tries(struct acl_build_context *context, last = build_one_trie(context, rule_sets, n, context->node_max); if (context->bld_tries[n].trie == NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build 
of %u-th trie failed", n); return -ENOMEM; } @@ -1383,8 +1383,8 @@ acl_build_tries(struct acl_build_context *context, break; if (num_tries == RTE_DIM(context->tries)) { - RTE_LOG(ERR, ACL, - "Exceeded max number of tries: %u\n", + RTE_LOG_LINE(ERR, ACL, + "Exceeded max number of tries: %u", num_tries); return -ENOMEM; } @@ -1409,7 +1409,7 @@ acl_build_tries(struct acl_build_context *context, */ last = build_one_trie(context, rule_sets, n, INT32_MAX); if (context->bld_tries[n].trie == NULL || last != NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1435,8 +1435,8 @@ acl_build_log(const struct acl_build_context *ctx) for (n = 0; n < RTE_DIM(ctx->tries); n++) { if (ctx->tries[n].count != 0) - RTE_LOG(DEBUG, ACL, - "trie %u: number of rules: %u, indexes: %u\n", + RTE_LOG_LINE(DEBUG, ACL, + "trie %u: number of rules: %u, indexes: %u", n, ctx->tries[n].count, ctx->tries[n].num_data_indexes); } @@ -1526,8 +1526,8 @@ acl_bld(struct acl_build_context *bcx, struct rte_acl_ctx *ctx, /* build phase runs out of memory. */ if (rc != 0) { - RTE_LOG(ERR, ACL, - "ACL context: %s, %s() failed with error code: %d\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, %s() failed with error code: %d", bcx->acx->name, __func__, rc); return rc; } @@ -1568,8 +1568,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) for (i = 0; i != cfg->num_fields; i++) { if (cfg->defs[i].type > RTE_ACL_FIELD_TYPE_BITMASK) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid type: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid type: %hhu for %u-th field", ctx->name, cfg->defs[i].type, i); return -EINVAL; } @@ -1580,8 +1580,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) ; if (j == RTE_DIM(field_sizes)) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid size: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid size: %hhu for %u-th field", ctx->name, cfg->defs[i].size, i); return -EINVAL; } diff --git a/lib/acl/acl_gen.c b/lib/acl/acl_gen.c index 03a47ea231..2f612df1e0 100644 --- a/lib/acl/acl_gen.c +++ b/lib/acl/acl_gen.c @@ -471,9 +471,9 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, XMM_SIZE; if (total_size > max_size) { - RTE_LOG(DEBUG, ACL, + RTE_LOG_LINE(DEBUG, ACL, "Gen phase for ACL ctx \"%s\" exceeds max_size limit, " - "bytes required: %zu, allowed: %zu\n", + "bytes required: %zu, allowed: %zu", ctx->name, total_size, max_size); return -ERANGE; } @@ -481,8 +481,8 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE, ctx->socket_id); if (mem == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes on socket %d for %s failed", total_size, ctx->socket_id, ctx->name); return -ENOMEM; } diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c index 760c3587d4..bec26d0a22 100644 --- a/lib/acl/rte_acl.c +++ b/lib/acl/rte_acl.c @@ -399,15 +399,15 @@ rte_acl_create(const struct rte_acl_param *param) te = rte_zmalloc("ACL_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, ACL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, ACL, "Cannot allocate tailq entry!"); goto exit; } ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, param->socket_id); if (ctx == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d 
for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes on socket %d for %s failed", sz, param->socket_id, name); rte_free(te); goto exit; @@ -473,7 +473,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules, ((uintptr_t)rules + i * ctx->rule_sz); rc = acl_check_rule(&rv->data); if (rc != 0) { - RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n", + RTE_LOG_LINE(ERR, ACL, "%s(%s): rule #%u is invalid", __func__, ctx->name, i + 1); return rc; } diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 4ee65b23da..7914899363 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,7 +26,7 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + RTE_LOG_LINE(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes", __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c index acac14131a..456f27112c 100644 --- a/lib/eal/common/eal_common_bus.c +++ b/lib/eal/common/eal_common_bus.c @@ -35,14 +35,14 @@ rte_bus_register(struct rte_bus *bus) RTE_VERIFY(!bus->plug || bus->unplug); TAILQ_INSERT_TAIL(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] bus.", rte_bus_name(bus)); } void rte_bus_unregister(struct rte_bus *bus) { TAILQ_REMOVE(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] bus.", rte_bus_name(bus)); } /* Scan all the buses for registered devices */ @@ -55,7 +55,7 @@ rte_bus_scan(void) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus->scan(); if (ret) - RTE_LOG(ERR, EAL, "Scan for (%s) bus failed.\n", + RTE_LOG_LINE(ERR, EAL, "Scan for (%s) bus failed.", rte_bus_name(bus)); } @@ -77,14 +77,14 @@ rte_bus_probe(void) ret = bus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(bus)); } if (vbus) { ret = vbus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(vbus)); } @@ -133,7 +133,7 @@ rte_bus_dump(FILE *f) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus_dump_one(f, bus); if (ret) { - RTE_LOG(ERR, EAL, "Unable to write to stream (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Unable to write to stream (%d)", ret); break; } @@ -235,15 +235,15 @@ rte_bus_get_iommu_class(void) continue; bus_iova_mode = bus->get_iommu_class(); - RTE_LOG(DEBUG, EAL, "Bus %s wants IOVA as '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Bus %s wants IOVA as '%s'", rte_bus_name(bus), bus_iova_mode == RTE_IOVA_DC ? "DC" : (bus_iova_mode == RTE_IOVA_PA ? 
"PA" : "VA")); if (bus_iova_mode == RTE_IOVA_PA) { buses_want_pa = true; if (!RTE_IOVA_IN_MBUF) - RTE_LOG(WARNING, EAL, - "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.\n", + RTE_LOG_LINE(WARNING, EAL, + "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.", rte_bus_name(bus)); } else if (bus_iova_mode == RTE_IOVA_VA) buses_want_va = true; @@ -255,8 +255,8 @@ rte_bus_get_iommu_class(void) } else { mode = RTE_IOVA_DC; if (buses_want_va) { - RTE_LOG(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.\n"); - RTE_LOG(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.\n"); + RTE_LOG_LINE(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'."); + RTE_LOG_LINE(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize."); } } diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c index 0187076af1..02a983b286 100644 --- a/lib/eal/common/eal_common_class.c +++ b/lib/eal/common/eal_common_class.c @@ -19,14 +19,14 @@ rte_class_register(struct rte_class *class) RTE_VERIFY(class->name && strlen(class->name)); TAILQ_INSERT_TAIL(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] device class.", class->name); } void rte_class_unregister(struct rte_class *class) { TAILQ_REMOVE(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] device class.", class->name); } struct rte_class * diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c index 0daf0f3188..4b6530f2fb 100644 --- a/lib/eal/common/eal_common_config.c +++ b/lib/eal/common/eal_common_config.c @@ -31,7 +31,7 @@ int eal_set_runtime_dir(const char *run_dir) { if (strlcpy(runtime_dir, run_dir, PATH_MAX) >= PATH_MAX) { - RTE_LOG(ERR, EAL, "Runtime directory string too long\n"); + RTE_LOG_LINE(ERR, EAL, "Runtime directory string too long"); return -1; } diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c index 9cac9c6390..065843f34e 100644 --- a/lib/eal/common/eal_common_debug.c +++ b/lib/eal/common/eal_common_debug.c @@ -16,7 +16,7 @@ __rte_panic(const char *funcname, const char *format, ...) { va_list ap; - rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, "PANIC in %s():\n", funcname); + RTE_LOG_LINE(CRIT, EAL, "PANIC in %s():", funcname); va_start(ap, format); rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap); va_end(ap); @@ -42,7 +42,7 @@ rte_exit(int exit_code, const char *format, ...) 
va_end(ap); if (rte_eal_cleanup() != 0 && rte_errno != EALREADY) - RTE_LOG(CRIT, EAL, - "EAL could not release all resources\n"); + RTE_LOG_LINE(CRIT, EAL, + "EAL could not release all resources"); exit(exit_code); } diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c index 614ef6c9fc..359907798a 100644 --- a/lib/eal/common/eal_common_dev.c +++ b/lib/eal/common/eal_common_dev.c @@ -182,7 +182,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) goto err_devarg; if (da->bus->plug == NULL) { - RTE_LOG(ERR, EAL, "Function plug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function plug not supported by bus (%s)", da->bus->name); ret = -ENOTSUP; goto err_devarg; @@ -199,7 +199,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) dev = da->bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s)", da->name); ret = -ENODEV; goto err_devarg; @@ -214,7 +214,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) ret = -ENOTSUP; if (ret && !rte_dev_is_probed(dev)) { /* if hasn't ever succeeded */ - RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot attach the device (%s)", dev->name); return ret; } @@ -248,13 +248,13 @@ rte_dev_probe(const char *devargs) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug add device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug add device"); return req.result; } @@ -264,8 +264,8 @@ rte_dev_probe(const char *devargs) ret = local_dev_probe(devargs, &dev); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on primary process"); /** * it is possible that secondary process failed to attached a @@ -282,8 +282,8 @@ rte_dev_probe(const char *devargs) /* if any communication error, we need to rollback. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug add request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug add request to secondary"); ret = -ENOMSG; goto rollback; } @@ -293,8 +293,8 @@ rte_dev_probe(const char *devargs) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on secondary process"); ret = req.result; /* for -EEXIST, we don't need to rollback. */ @@ -310,15 +310,15 @@ rte_dev_probe(const char *devargs) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on secondary." - "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); /* primary rollback itself. */ if (local_dev_remove(dev) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on primary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -331,13 +331,13 @@ rte_eal_hotplug_remove(const char *busname, const char *devname) bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); return -ENOENT; } dev = bus->find_device(NULL, cmp_dev_name, devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", devname); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", devname); return -EINVAL; } @@ -351,14 +351,14 @@ local_dev_remove(struct rte_device *dev) int ret; if (dev->bus->unplug == NULL) { - RTE_LOG(ERR, EAL, "Function unplug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function unplug not supported by bus (%s)", dev->bus->name); return -ENOTSUP; } ret = dev->bus->unplug(dev); if (ret) { - RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot detach the device (%s)", dev->name); return (ret < 0) ? ret : -ENOENT; } @@ -374,7 +374,7 @@ rte_dev_remove(struct rte_device *dev) int ret; if (!rte_dev_is_probed(dev)) { - RTE_LOG(ERR, EAL, "Device is not probed\n"); + RTE_LOG_LINE(ERR, EAL, "Device is not probed"); return -ENOENT; } @@ -394,13 +394,13 @@ rte_dev_remove(struct rte_device *dev) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug remove device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug remove device"); return req.result; } @@ -414,8 +414,8 @@ rte_dev_remove(struct rte_device *dev) * part of the secondary processes still detached it successfully. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send device detach request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send device detach request to secondary"); ret = -ENOMSG; goto rollback; } @@ -425,8 +425,8 @@ rte_dev_remove(struct rte_device *dev) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on secondary process"); ret = req.result; /** * if -ENOENT, we don't need to rollback, since devices is @@ -441,8 +441,8 @@ rte_dev_remove(struct rte_device *dev) /* if primary failed, still need to consider if rollback is necessary */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on primary process"); /* if -ENOENT, we don't need to rollback */ if (ret == -ENOENT) return ret; @@ -456,9 +456,9 @@ rte_dev_remove(struct rte_device *dev) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device detach on secondary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -508,16 +508,16 @@ rte_dev_event_callback_register(const char *device_name, } TAILQ_INSERT_TAIL(&dev_event_cbs, event_cb, next); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for device " "event callback."); ret = -ENOMEM; goto error; } } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "The callback is already exist, no need " - "to register again.\n"); + "to register again."); event_cb = NULL; ret = -EEXIST; goto error; @@ -635,17 +635,17 @@ rte_dev_iterator_init(struct rte_dev_iterator *it, * one layer specified. */ if (bus == NULL && cls == NULL) { - RTE_LOG(DEBUG, EAL, "Either bus or class must be specified.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Either bus or class must be specified."); rte_errno = EINVAL; goto get_out; } if (bus != NULL && bus->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Bus %s not supported\n", bus->name); + RTE_LOG_LINE(DEBUG, EAL, "Bus %s not supported", bus->name); rte_errno = ENOTSUP; goto get_out; } if (cls != NULL && cls->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Class %s not supported\n", cls->name); + RTE_LOG_LINE(DEBUG, EAL, "Class %s not supported", cls->name); rte_errno = ENOTSUP; goto get_out; } diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c index fb5d0a293b..dbf5affa76 100644 --- a/lib/eal/common/eal_common_devargs.c +++ b/lib/eal/common/eal_common_devargs.c @@ -39,12 +39,12 @@ devargs_bus_parse_default(struct rte_devargs *devargs, /* Parse devargs name from bus key-value list. */ name = rte_kvargs_get(bus_args, "name"); if (name == NULL) { - RTE_LOG(DEBUG, EAL, "devargs name not found: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "devargs name not found: %s", devargs->data); return 0; } if (rte_strscpy(devargs->name, name, sizeof(devargs->name)) < 0) { - RTE_LOG(ERR, EAL, "devargs name too long: %s\n", + RTE_LOG_LINE(ERR, EAL, "devargs name too long: %s", devargs->data); return -E2BIG; } @@ -79,7 +79,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, if (devargs->data != devstr) { devargs->data = strdup(devstr); if (devargs->data == NULL) { - RTE_LOG(ERR, EAL, "OOM\n"); + RTE_LOG_LINE(ERR, EAL, "OOM"); ret = -ENOMEM; goto get_out; } @@ -133,7 +133,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->bus_str = layers[i].str; devargs->bus = rte_bus_find_by_name(kv->value); if (devargs->bus == NULL) { - RTE_LOG(ERR, EAL, "Could not find bus \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find bus \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -142,7 +142,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->cls_str = layers[i].str; devargs->cls = rte_class_find_by_name(kv->value); if (devargs->cls == NULL) { - RTE_LOG(ERR, EAL, "Could not find class \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find class \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -217,7 +217,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) da->name[i] = devname[i]; i++; if (i == maxlen) { - RTE_LOG(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu\n", + RTE_LOG_LINE(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu", dev, maxlen); da->name[i - 1] = '\0'; return -EINVAL; @@ -227,7 +227,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) if (bus == NULL) { bus = rte_bus_find_by_device_name(da->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "failed to parse device \"%s\"\n", + 
RTE_LOG_LINE(ERR, EAL, "failed to parse device \"%s\"", da->name); return -EFAULT; } @@ -239,7 +239,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) else da->data = strdup(""); if (da->data == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse arguments\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse arguments"); return -ENOMEM; } da->drv_str = da->data; @@ -266,7 +266,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...) len += 1; dev = calloc(1, (size_t)len); if (dev == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse device\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse device"); return -ENOMEM; } diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c index 95da55d9b0..721cb63bf2 100644 --- a/lib/eal/common/eal_common_dynmem.c +++ b/lib/eal/common/eal_common_dynmem.c @@ -76,7 +76,7 @@ eal_dynmem_memseg_lists_init(void) n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count(); memtypes = calloc(n_memtypes, sizeof(*memtypes)); if (memtypes == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate space for memory types"); return -1; } @@ -101,8 +101,8 @@ eal_dynmem_memseg_lists_init(void) memtypes[cur_type].page_sz = hugepage_sz; memtypes[cur_type].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected memory type: " - "socket_id:%u hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected memory type: " + "socket_id:%u hugepage_sz:%" PRIu64, socket_id, hugepage_sz); } } @@ -120,7 +120,7 @@ eal_dynmem_memseg_lists_init(void) max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes; if (max_seglists_per_type == 0) { - RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } @@ -171,15 +171,15 @@ eal_dynmem_memseg_lists_init(void) /* limit number of segment lists according to our maximum */ n_seglists = RTE_MIN(n_seglists, max_seglists_per_type); - RTE_LOG(DEBUG, EAL, "Creating %i segment lists: " - "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Creating %i segment lists: " + "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64, n_seglists, n_segs, socket_id, pagesz); /* create all segment lists */ for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) { if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } msl = &mcfg->memsegs[msl_idx++]; @@ -189,7 +189,7 @@ eal_dynmem_memseg_lists_init(void) goto out; if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); goto out; } } @@ -287,9 +287,9 @@ eal_dynmem_hugepage_init(void) if (num_pages == 0) continue; - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Allocating %u pages of size %" PRIu64 "M " - "on socket %i\n", + "on socket %i", num_pages, hpi->hugepage_sz >> 20, socket_id); /* we may not be able to allocate all pages in one go, @@ -307,7 +307,7 @@ eal_dynmem_hugepage_init(void) pages = malloc(sizeof(*pages) * needed); if (pages == NULL) { - RTE_LOG(ERR, EAL, "Failed to malloc pages\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to malloc pages"); return -1; } @@ -342,7 +342,7 @@ 
eal_dynmem_hugepage_init(void) continue; if (rte_mem_alloc_validator_register("socket-limit", limits_callback, i, limit)) - RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to register socket limits validator callback"); } } return 0; @@ -515,8 +515,8 @@ eal_dynmem_calc_num_pages_per_socket( internal_conf->socket_mem[socket] / 0x100000); available = requested - ((unsigned int)(memory[socket] / 0x100000)); - RTE_LOG(ERR, EAL, "Not enough memory available on " - "socket %u! Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available on " + "socket %u! Requested: %uMB, available: %uMB", socket, requested, available); return -1; } @@ -526,8 +526,8 @@ eal_dynmem_calc_num_pages_per_socket( if (total_mem > 0) { requested = (unsigned int)(internal_conf->memory / 0x100000); available = requested - (unsigned int)(total_mem / 0x100000); - RTE_LOG(ERR, EAL, "Not enough memory available! " - "Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available! " + "Requested: %uMB, available: %uMB", requested, available); return -1; } diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c index 2055bfa57d..7b90e01500 100644 --- a/lib/eal/common/eal_common_fbarray.c +++ b/lib/eal/common/eal_common_fbarray.c @@ -83,7 +83,7 @@ resize_and_map(int fd, const char *path, void *addr, size_t len) void *map_addr; if (eal_file_truncate(fd, len)) { - RTE_LOG(ERR, EAL, "Cannot truncate %s\n", path); + RTE_LOG_LINE(ERR, EAL, "Cannot truncate %s", path); return -1; } @@ -755,7 +755,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, void *new_data = rte_mem_map(data, mmap_len, RTE_PROT_READ | RTE_PROT_WRITE, flags, fd, 0); if (new_data == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s", __func__, rte_strerror(rte_errno)); goto fail; } @@ -770,12 +770,12 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, */ fd = eal_file_open(path, EAL_OPEN_CREATE | EAL_OPEN_READWRITE); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't open %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't open %s: %s", __func__, path, rte_strerror(rte_errno)); goto fail; } else if (eal_file_lock( fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't lock %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't lock %s: %s", __func__, path, rte_strerror(rte_errno)); rte_errno = EBUSY; goto fail; @@ -1017,7 +1017,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) */ fd = tmp->fd; if (eal_file_lock(fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "Cannot destroy fbarray - another process is using it\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot destroy fbarray - another process is using it"); rte_errno = EBUSY; ret = -1; goto out; @@ -1026,7 +1026,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) /* we're OK to destroy the file */ eal_get_fbarray_path(path, sizeof(path), arr->name); if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "Cannot unlink fbarray: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot unlink fbarray: %s", strerror(errno)); rte_errno = errno; /* diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c index 97b64fed58..6a5723a068 100644 --- a/lib/eal/common/eal_common_interrupts.c +++ b/lib/eal/common/eal_common_interrupts.c @@ -15,7 +15,7 @@ /* Macros to check for 
valid interrupt handle */ #define CHECK_VALID_INTR_HANDLE(intr_handle) do { \ if (intr_handle == NULL) { \ - RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \ + RTE_LOG_LINE(DEBUG, EAL, "Interrupt instance unallocated"); \ rte_errno = EINVAL; \ goto fail; \ } \ @@ -37,7 +37,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) * defined flags. */ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) { - RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags); + RTE_LOG_LINE(DEBUG, EAL, "Invalid alloc flag passed 0x%x", flags); rte_errno = EINVAL; return NULL; } @@ -48,7 +48,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) else intr_handle = calloc(1, sizeof(*intr_handle)); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate intr_handle"); rte_errno = ENOMEM; return NULL; } @@ -61,7 +61,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(int)); } if (intr_handle->efds == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -75,7 +75,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(struct rte_epoll_event)); } if (intr_handle->elist == NULL) { - RTE_LOG(ERR, EAL, "fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -100,7 +100,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src) struct rte_intr_handle *intr_handle; if (src == NULL) { - RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n"); + RTE_LOG_LINE(DEBUG, EAL, "Source interrupt instance unallocated"); rte_errno = EINVAL; return NULL; } @@ -129,7 +129,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) CHECK_VALID_INTR_HANDLE(intr_handle); if (size == 0) { - RTE_LOG(DEBUG, EAL, "Size can't be zero\n"); + RTE_LOG_LINE(DEBUG, EAL, "Size can't be zero"); rte_errno = EINVAL; goto fail; } @@ -143,7 +143,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) tmp_efds = realloc(intr_handle->efds, size * sizeof(int)); } if (tmp_efds == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the efds list"); rte_errno = ENOMEM; goto fail; } @@ -157,7 +157,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) size * sizeof(struct rte_epoll_event)); } if (tmp_elist == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the event list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the event list"); rte_errno = ENOMEM; goto fail; } @@ -253,8 +253,8 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (max_intr > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " - "the number of available events (%d)\n", max_intr, + RTE_LOG_LINE(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " + "the number of available events (%d)", max_intr, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -332,7 +332,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = EINVAL; 
goto fail; @@ -349,7 +349,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -368,7 +368,7 @@ struct rte_epoll_event *rte_intr_elist_index_get( CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -385,7 +385,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -408,7 +408,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, return 0; if (size > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size, + RTE_LOG_LINE(DEBUG, EAL, "Invalid size %d, max limit %d", size, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -419,7 +419,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, else intr_handle->intr_vec = calloc(size, sizeof(int)); if (intr_handle->intr_vec == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate %d intr_vec", size); rte_errno = ENOMEM; goto fail; } @@ -437,7 +437,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; @@ -454,7 +454,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c index 6807a38247..4ec1996d12 100644 --- a/lib/eal/common/eal_common_lcore.c +++ b/lib/eal/common/eal_common_lcore.c @@ -174,8 +174,8 @@ rte_eal_cpu_init(void) lcore_config[lcore_id].core_role = ROLE_RTE; lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id); lcore_config[lcore_id].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected lcore %u as " - "core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected lcore %u as " + "core %u on socket %u", lcore_id, lcore_config[lcore_id].core_id, lcore_config[lcore_id].socket_id); count++; @@ -183,17 +183,17 @@ rte_eal_cpu_init(void) for (; lcore_id < CPU_SETSIZE; lcore_id++) { if (eal_cpu_detected(lcore_id) == 0) continue; - RTE_LOG(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u", lcore_id, eal_cpu_core_id(lcore_id), eal_cpu_socket_id(lcore_id)); } /* Set the count of enabled logical cores of the EAL configuration */ config->lcore_count = count; - RTE_LOG(DEBUG, EAL, - "Maximum 
logical cores by configuration: %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Maximum logical cores by configuration: %u", RTE_MAX_LCORE); - RTE_LOG(INFO, EAL, "Detected CPU lcores: %u\n", config->lcore_count); + RTE_LOG_LINE(INFO, EAL, "Detected CPU lcores: %u", config->lcore_count); /* sort all socket id's in ascending order */ qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id), @@ -208,7 +208,7 @@ rte_eal_cpu_init(void) socket_id; prev_socket_id = socket_id; } - RTE_LOG(INFO, EAL, "Detected NUMA nodes: %u\n", config->numa_node_count); + RTE_LOG_LINE(INFO, EAL, "Detected NUMA nodes: %u", config->numa_node_count); return 0; } @@ -247,7 +247,7 @@ callback_init(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->init == NULL) return 0; - RTE_LOG(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u", callback->name, lcore_id); return callback->init(lcore_id, callback->arg); } @@ -257,7 +257,7 @@ callback_uninit(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->uninit == NULL) return; - RTE_LOG(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u", callback->name, lcore_id); callback->uninit(lcore_id, callback->arg); } @@ -311,7 +311,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init, } no_init: TAILQ_INSERT_TAIL(&lcore_callbacks, callback, next); - RTE_LOG(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).", callback->name, callback->init == NULL ? "NO " : "", callback->uninit == NULL ? "NO " : ""); out: @@ -339,7 +339,7 @@ rte_lcore_callback_unregister(void *handle) no_uninit: TAILQ_REMOVE(&lcore_callbacks, callback, next); rte_rwlock_write_unlock(&lcore_lock); - RTE_LOG(DEBUG, EAL, "Unregistered lcore callback %s-%p.\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered lcore callback %s-%p.", callback->name, callback->arg); free_callback(callback); } @@ -361,7 +361,7 @@ eal_lcore_non_eal_allocate(void) break; } if (lcore_id == RTE_MAX_LCORE) { - RTE_LOG(DEBUG, EAL, "No lcore available.\n"); + RTE_LOG_LINE(DEBUG, EAL, "No lcore available."); goto out; } TAILQ_FOREACH(callback, &lcore_callbacks, next) { @@ -375,7 +375,7 @@ eal_lcore_non_eal_allocate(void) callback_uninit(prev, lcore_id); prev = TAILQ_PREV(prev, lcore_callbacks_head, next); } - RTE_LOG(DEBUG, EAL, "Initialization refused for lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Initialization refused for lcore %u.", lcore_id); cfg->lcore_role[lcore_id] = ROLE_OFF; cfg->lcore_count--; diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c index ab04479c1c..feb22c2b2f 100644 --- a/lib/eal/common/eal_common_memalloc.c +++ b/lib/eal/common/eal_common_memalloc.c @@ -186,7 +186,7 @@ eal_memalloc_mem_event_callback_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' registered", name, arg); unlock: @@ -225,7 +225,7 @@ eal_memalloc_mem_event_callback_unregister(const char *name, void *arg) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' unregistered", name, arg); unlock: @@ -242,7 +242,7 @@ eal_memalloc_mem_event_notify(enum rte_mem_event event, const void *start, rte_rwlock_read_lock(&mem_event_rwlock); 
TAILQ_FOREACH(entry, &mem_event_callback_list, next) { - RTE_LOG(DEBUG, EAL, "Calling mem event callback '%s:%p'\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem event callback '%s:%p'", entry->name, entry->arg); entry->clb(event, start, len, entry->arg); } @@ -293,7 +293,7 @@ eal_memalloc_mem_alloc_validator_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered", name, socket_id, limit); unlock: @@ -332,7 +332,7 @@ eal_memalloc_mem_alloc_validator_unregister(const char *name, int socket_id) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered", name, socket_id); unlock: @@ -351,7 +351,7 @@ eal_memalloc_mem_alloc_validate(int socket_id, size_t new_len) TAILQ_FOREACH(entry, &mem_alloc_validator_list, next) { if (entry->socket_id != socket_id || entry->limit > new_len) continue; - RTE_LOG(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i", entry->name, entry->socket_id); if (entry->clb(socket_id, entry->limit, new_len) < 0) ret = -1; diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c index d9433db623..9e183669a6 100644 --- a/lib/eal/common/eal_common_memory.c +++ b/lib/eal/common/eal_common_memory.c @@ -57,7 +57,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, if (system_page_sz == 0) system_page_sz = rte_mem_page_size(); - RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size); + RTE_LOG_LINE(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes", *size); addr_is_hint = (flags & EAL_VIRTUAL_AREA_ADDR_IS_HINT) > 0; allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0; @@ -94,7 +94,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, do { map_sz = no_align ? *size : *size + page_sz; if (map_sz > SIZE_MAX) { - RTE_LOG(ERR, EAL, "Map size too big\n"); + RTE_LOG_LINE(ERR, EAL, "Map size too big"); rte_errno = E2BIG; return NULL; } @@ -125,16 +125,16 @@ eal_get_virtual_area(void *requested_addr, size_t *size, RTE_PTR_ALIGN(mapped_addr, page_sz); if (*size == 0) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area of any size: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area of any size: %s", rte_strerror(rte_errno)); return NULL; } else if (mapped_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area: %s", rte_strerror(rte_errno)); return NULL; } else if (requested_addr != NULL && !addr_is_hint && aligned_addr != requested_addr) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)", requested_addr, aligned_addr); eal_mem_free(mapped_addr, map_sz); rte_errno = EADDRNOTAVAIL; @@ -146,19 +146,19 @@ eal_get_virtual_area(void *requested_addr, size_t *size, * a base virtual address. */ if (internal_conf->base_virtaddr != 0) { - RTE_LOG(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(WARNING, EAL, "WARNING! 
Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory into secondary processes"); } else { - RTE_LOG(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes"); } } else if (next_baseaddr != NULL) { next_baseaddr = RTE_PTR_ADD(aligned_addr, *size); } - RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n", + RTE_LOG_LINE(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)", aligned_addr, *size); if (unmap) { @@ -202,7 +202,7 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, { if (rte_fbarray_init(&msl->memseg_arr, name, n_segs, sizeof(struct rte_memseg))) { - RTE_LOG(ERR, EAL, "Cannot allocate memseg list: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list: %s", rte_strerror(rte_errno)); return -1; } @@ -212,8 +212,8 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, msl->base_va = NULL; msl->heap = heap; - RTE_LOG(DEBUG, EAL, - "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB", socket_id, page_sz >> 10); return 0; @@ -251,8 +251,8 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) * including common code, so don't duplicate the message. 
*/ if (rte_errno == EADDRNOTAVAIL) - RTE_LOG(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " - "please use '--" OPT_BASE_VIRTADDR "' option\n", + RTE_LOG_LINE(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " + "please use '--" OPT_BASE_VIRTADDR "' option", (unsigned long long)mem_sz, msl->base_va); #endif return -1; @@ -260,7 +260,7 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) msl->base_va = addr; msl->len = mem_sz; - RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n", + RTE_LOG_LINE(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx", addr, mem_sz); return 0; @@ -472,7 +472,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -487,7 +487,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -503,7 +503,7 @@ rte_mem_alloc_validator_register(const char *name, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -519,7 +519,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -545,10 +545,10 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, if (!(iova & *mask)) return 0; - RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n", + RTE_LOG_LINE(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range", ms->iova, ms->len); - RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask); + RTE_LOG_LINE(DEBUG, EAL, "\tusing dma mask %"PRIx64, *mask); return 1; } @@ -565,7 +565,7 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe) /* Sanity check. We only check width can be managed with 64 bits * variables. Indeed any higher value is likely wrong. 
*/ if (maskbits > MAX_DMA_MASK_BITS) { - RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n", + RTE_LOG_LINE(ERR, EAL, "wrong dma mask size %u (Max: %u)", maskbits, MAX_DMA_MASK_BITS); return -1; } @@ -925,7 +925,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[], /* get next available socket ID */ socket_id = mcfg->next_socket_id; if (socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -1030,7 +1030,7 @@ rte_eal_memory_detach(void) /* detach internal memory subsystem data first */ if (eal_memalloc_cleanup()) - RTE_LOG(ERR, EAL, "Could not release memory subsystem data\n"); + RTE_LOG_LINE(ERR, EAL, "Could not release memory subsystem data"); for (i = 0; i < RTE_DIM(mcfg->memsegs); i++) { struct rte_memseg_list *msl = &mcfg->memsegs[i]; @@ -1047,7 +1047,7 @@ rte_eal_memory_detach(void) */ if (!msl->external) if (rte_mem_unmap(msl->base_va, msl->len) != 0) - RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap memory: %s", rte_strerror(rte_errno)); /* @@ -1056,7 +1056,7 @@ rte_eal_memory_detach(void) * have no way of knowing if they still do. */ if (rte_fbarray_detach(&msl->memseg_arr)) - RTE_LOG(ERR, EAL, "Could not detach fbarray: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not detach fbarray: %s", rte_strerror(rte_errno)); } rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock); @@ -1068,7 +1068,7 @@ rte_eal_memory_detach(void) */ if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) { if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0) - RTE_LOG(ERR, EAL, "Could not unmap shared memory config: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap shared memory config: %s", rte_strerror(rte_errno)); } rte_eal_get_configuration()->mem_config = NULL; @@ -1084,7 +1084,7 @@ rte_eal_memory_init(void) eal_get_internal_configuration(); int retval; - RTE_LOG(DEBUG, EAL, "Setting up physically contiguous memory...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Setting up physically contiguous memory..."); if (rte_eal_memseg_init() < 0) goto fail; @@ -1213,7 +1213,7 @@ handle_eal_memzone_info_request(const char *cmd __rte_unused, /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return -1; } page_sz = (size_t)mz->hugepage_sz; @@ -1404,7 +1404,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1477,7 +1477,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1555,7 +1555,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c index 
1f3e701499..fc478d0fac 100644 --- a/lib/eal/common/eal_common_memzone.c +++ b/lib/eal/common/eal_common_memzone.c @@ -31,13 +31,13 @@ rte_memzone_max_set(size_t max) struct rte_mem_config *mcfg; if (eal_get_internal_configuration()->init_complete > 0) { - RTE_LOG(ERR, EAL, "Max memzone cannot be set after EAL init\n"); + RTE_LOG_LINE(ERR, EAL, "Max memzone cannot be set after EAL init"); return -1; } mcfg = rte_eal_get_configuration()->mem_config; if (mcfg == NULL) { - RTE_LOG(ERR, EAL, "Failed to set max memzone count\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to set max memzone count"); return -1; } @@ -116,16 +116,16 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* no more room in config */ if (arr->count >= arr->len) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): Number of requested memzone segments exceeds maximum " - "%u\n", __func__, arr->len); + "%u", __func__, arr->len); rte_errno = ENOSPC; return NULL; } if (strlen(name) > sizeof(mz->name) - 1) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s>: name too long", __func__, name); rte_errno = ENAMETOOLONG; return NULL; @@ -133,7 +133,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* zone already exist */ if ((memzone_lookup_thread_unsafe(name)) != NULL) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s> already exists", __func__, name); rte_errno = EEXIST; return NULL; @@ -141,7 +141,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* if alignment is not a power of two */ if (align && !rte_is_power_of_2(align)) { - RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Invalid alignment: %u", __func__, align); rte_errno = EINVAL; return NULL; @@ -218,7 +218,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, } if (mz == NULL) { - RTE_LOG(ERR, EAL, "%s(): Cannot find free memzone\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot find free memzone", __func__); malloc_heap_free(elem); rte_errno = ENOSPC; return NULL; @@ -323,7 +323,7 @@ rte_memzone_free(const struct rte_memzone *mz) if (found_mz == NULL) { ret = -EINVAL; } else if (found_mz->addr == NULL) { - RTE_LOG(ERR, EAL, "Memzone is not allocated\n"); + RTE_LOG_LINE(ERR, EAL, "Memzone is not allocated"); ret = -EINVAL; } else { addr = found_mz->addr; @@ -385,7 +385,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg) /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return; } page_sz = (size_t)mz->hugepage_sz; @@ -434,11 +434,11 @@ rte_eal_memzone_init(void) if (rte_eal_process_type() == RTE_PROC_PRIMARY && rte_fbarray_init(&mcfg->memzones, "memzone", rte_memzone_max_get(), sizeof(struct rte_memzone))) { - RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memzone list"); ret = -1; } else if (rte_eal_process_type() == RTE_PROC_SECONDARY && rte_fbarray_attach(&mcfg->memzones)) { - RTE_LOG(ERR, EAL, "Cannot attach to memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to memzone list"); ret = -1; } diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index e9ba01fb89..c1af05b134 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -255,14 +255,14 
@@ eal_option_device_add(enum rte_devtype type, const char *optarg) optlen = strlen(optarg) + 1; devopt = calloc(1, sizeof(*devopt) + optlen); if (devopt == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate device option"); return -ENOMEM; } devopt->type = type; ret = strlcpy(devopt->arg, optarg, optlen); if (ret < 0) { - RTE_LOG(ERR, EAL, "Unable to copy device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to copy device option"); free(devopt); return -EINVAL; } @@ -281,7 +281,7 @@ eal_option_device_parse(void) if (ret == 0) { ret = rte_devargs_add(devopt->type, devopt->arg); if (ret) - RTE_LOG(ERR, EAL, "Unable to parse device '%s'\n", + RTE_LOG_LINE(ERR, EAL, "Unable to parse device '%s'", devopt->arg); } TAILQ_REMOVE(&devopt_list, devopt, next); @@ -360,7 +360,7 @@ eal_plugin_add(const char *path) solib = malloc(sizeof(*solib)); if (solib == NULL) { - RTE_LOG(ERR, EAL, "malloc(solib) failed\n"); + RTE_LOG_LINE(ERR, EAL, "malloc(solib) failed"); return -1; } memset(solib, 0, sizeof(*solib)); @@ -390,7 +390,7 @@ eal_plugindir_init(const char *path) d = opendir(path); if (d == NULL) { - RTE_LOG(ERR, EAL, "failed to open directory %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open directory %s: %s", path, strerror(errno)); return -1; } @@ -442,13 +442,13 @@ verify_perms(const char *dirpath) /* call stat to check for permissions and ensure not world writable */ if (stat(dirpath, &st) != 0) { - RTE_LOG(ERR, EAL, "Error with stat on %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with stat on %s, %s", dirpath, strerror(errno)); return -1; } if (st.st_mode & S_IWOTH) { - RTE_LOG(ERR, EAL, - "Error, directory path %s is world-writable and insecure\n", + RTE_LOG_LINE(ERR, EAL, + "Error, directory path %s is world-writable and insecure", dirpath); return -1; } @@ -466,16 +466,16 @@ eal_dlopen(const char *pathname) /* not a full or relative path, try a load from system dirs */ retval = dlopen(pathname, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); return retval; } if (realp == NULL) { - RTE_LOG(ERR, EAL, "Error with realpath for %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with realpath for %s, %s", pathname, strerror(errno)); goto out; } if (strnlen(realp, PATH_MAX) == PATH_MAX) { - RTE_LOG(ERR, EAL, "Error, driver path greater than PATH_MAX\n"); + RTE_LOG_LINE(ERR, EAL, "Error, driver path greater than PATH_MAX"); goto out; } @@ -485,7 +485,7 @@ eal_dlopen(const char *pathname) retval = dlopen(realp, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); out: free(realp); return retval; @@ -500,7 +500,7 @@ is_shared_build(void) len = strlcpy(soname, EAL_SO"."ABI_VERSION, sizeof(soname)); if (len > sizeof(soname)) { - RTE_LOG(ERR, EAL, "Shared lib name too long in shared build check\n"); + RTE_LOG_LINE(ERR, EAL, "Shared lib name too long in shared build check"); len = sizeof(soname) - 1; } @@ -508,10 +508,10 @@ is_shared_build(void) void *handle; /* check if we have this .so loaded, if so - shared build */ - RTE_LOG(DEBUG, EAL, "Checking presence of .so '%s'\n", soname); + RTE_LOG_LINE(DEBUG, EAL, "Checking presence of .so '%s'", soname); handle = dlopen(soname, RTLD_LAZY | RTLD_NOLOAD); if (handle != NULL) { - RTE_LOG(INFO, EAL, "Detected shared linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected shared linkage of DPDK"); dlclose(handle); return 1; } @@ -524,7 +524,7 @@ is_shared_build(void) } } - RTE_LOG(INFO, 
EAL, "Detected static linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected static linkage of DPDK"); return 0; } @@ -549,13 +549,13 @@ eal_plugins_init(void) if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) { if (eal_plugindir_init(solib->name) == -1) { - RTE_LOG(ERR, EAL, - "Cannot init plugin directory %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot init plugin directory %s", solib->name); return -1; } } else { - RTE_LOG(DEBUG, EAL, "open shared lib %s\n", + RTE_LOG_LINE(DEBUG, EAL, "open shared lib %s", solib->name); solib->lib_handle = eal_dlopen(solib->name); if (solib->lib_handle == NULL) @@ -626,15 +626,15 @@ eal_parse_service_coremask(const char *coremask) uint32_t lcore = idx; if (main_lcore_parsed && cfg->main_lcore == lcore) { - RTE_LOG(ERR, EAL, - "lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "lcore %u is main lcore, cannot use as service core", idx); return -1; } if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, - "lcore %u unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, + "lcore %u unavailable", idx); return -1; } @@ -658,9 +658,9 @@ eal_parse_service_coremask(const char *coremask) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores are in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } cfg->service_lcore_count = count; @@ -689,7 +689,7 @@ update_lcore_config(int *cores) for (i = 0; i < RTE_MAX_LCORE; i++) { if (cores[i] != -1) { if (eal_cpu_detected(i) == 0) { - RTE_LOG(ERR, EAL, "lcore %u unavailable\n", i); + RTE_LOG_LINE(ERR, EAL, "lcore %u unavailable", i); ret = -1; continue; } @@ -717,7 +717,7 @@ check_core_list(int *lcores, unsigned int count) if (lcores[i] < RTE_MAX_LCORE) continue; - RTE_LOG(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)", lcores[i], RTE_MAX_LCORE); overflow = true; } @@ -737,9 +737,9 @@ check_core_list(int *lcores, unsigned int count) } if (len > 0) lcorestr[len - 1] = 0; - RTE_LOG(ERR, EAL, "To use high physical core ids, " + RTE_LOG_LINE(ERR, EAL, "To use high physical core ids, " "please use --lcores to map them to lcore ids below RTE_MAX_LCORE, " - "e.g. --lcores %s\n", lcorestr); + "e.g. --lcores %s", lcorestr); return -1; } @@ -769,7 +769,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) while ((i > 0) && isblank(coremask[i - 1])) i--; if (i == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -778,7 +778,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) c = coremask[i]; if (isxdigit(c) == 0) { /* invalid characters */ - RTE_LOG(ERR, EAL, "invalid characters in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "invalid characters in coremask: [%s]", coremask_orig); return -1; } @@ -787,7 +787,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) { if ((1 << j) & val) { if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. 
Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -796,7 +796,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) } } if (count == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -864,8 +864,8 @@ eal_parse_service_corelist(const char *corelist) uint32_t lcore = idx; if (cfg->main_lcore == lcore && main_lcore_parsed) { - RTE_LOG(ERR, EAL, - "Error: lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "Error: lcore %u is main lcore, cannot use as service core", idx); return -1; } @@ -887,9 +887,9 @@ eal_parse_service_corelist(const char *corelist) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores were in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } return 0; @@ -943,7 +943,7 @@ eal_parse_corelist(const char *corelist, int *cores) if (dup) continue; if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -991,8 +991,8 @@ eal_parse_main_lcore(const char *arg) /* ensure main core is not used as service core */ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) { - RTE_LOG(ERR, EAL, - "Error: Main lcore is used as a service core\n"); + RTE_LOG_LINE(ERR, EAL, + "Error: Main lcore is used as a service core"); return -1; } @@ -1132,8 +1132,8 @@ check_cpuset(rte_cpuset_t *set) continue; if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, "core %u " - "unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, "core %u " + "unavailable", idx); return -1; } } @@ -1612,8 +1612,8 @@ eal_parse_huge_unlink(const char *arg, struct hugepage_file_discipline *out) return 0; } if (strcmp(arg, HUGE_UNLINK_NEVER) == 0) { - RTE_LOG(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" - HUGE_UNLINK_NEVER" may create data leaks.\n"); + RTE_LOG_LINE(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" + HUGE_UNLINK_NEVER" may create data leaks."); out->unlink_existing = false; return 0; } @@ -1648,24 +1648,24 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S"); if (rte_eal_parse_coremask(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid coremask syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid coremask syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid coremask, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid coremask, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -c is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -c is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MAP) ? 
"--lcore" : "-c"); @@ -1680,25 +1680,25 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S"); if (eal_parse_corelist(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid core list syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid core list syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid core list, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid core list, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -l is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -l is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_MSK) ? "-c" : (core_parsed == LCORE_OPT_MAP) ? "--lcore" : "-l"); @@ -1711,14 +1711,14 @@ eal_parse_common_option(int opt, const char *optarg, /* service coremask */ case 's': if (eal_parse_service_coremask(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service coremask\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service coremask"); return -1; } break; /* service corelist */ case 'S': if (eal_parse_service_corelist(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service core list\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service core list"); return -1; } break; @@ -1733,7 +1733,7 @@ eal_parse_common_option(int opt, const char *optarg, case 'n': conf->force_nchannel = atoi(optarg); if (conf->force_nchannel == 0) { - RTE_LOG(ERR, EAL, "invalid channel number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid channel number"); return -1; } break; @@ -1742,7 +1742,7 @@ eal_parse_common_option(int opt, const char *optarg, conf->force_nrank = atoi(optarg); if (conf->force_nrank == 0 || conf->force_nrank > 16) { - RTE_LOG(ERR, EAL, "invalid rank number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid rank number"); return -1; } break; @@ -1756,13 +1756,13 @@ eal_parse_common_option(int opt, const char *optarg, * write message at highest log level so it can always * be seen * even if info or warning messages are disabled */ - RTE_LOG(CRIT, EAL, "RTE Version: '%s'\n", rte_version()); + RTE_LOG_LINE(CRIT, EAL, "RTE Version: '%s'", rte_version()); break; /* long options */ case OPT_HUGE_UNLINK_NUM: if (eal_parse_huge_unlink(optarg, &conf->hugepage_file) < 0) { - RTE_LOG(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option\n"); + RTE_LOG_LINE(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option"); return -1; } break; @@ -1802,8 +1802,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_MAIN_LCORE_NUM: if (eal_parse_main_lcore(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_MAIN_LCORE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_MAIN_LCORE); return -1; } break; @@ -1818,8 +1818,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_SYSLOG_NUM: if (eal_parse_syslog(optarg, conf) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SYSLOG "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SYSLOG); return -1; } break; @@ -1827,9 +1827,9 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LOG_LEVEL_NUM: { if (eal_parse_log_level(optarg) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, 
"invalid parameters for --" - OPT_LOG_LEVEL "\n"); + OPT_LOG_LEVEL); return -1; } break; @@ -1838,8 +1838,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_TRACE_NUM: { if (eal_trace_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE); return -1; } break; @@ -1847,8 +1847,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_DIR_NUM: { if (eal_trace_dir_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_DIR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_DIR); return -1; } break; @@ -1856,8 +1856,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_BUF_SIZE_NUM: { if (eal_trace_bufsz_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_BUF_SIZE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_BUF_SIZE); return -1; } break; @@ -1865,8 +1865,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_MODE_NUM: { if (eal_trace_mode_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_MODE); return -1; } break; @@ -1875,13 +1875,13 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LCORES_NUM: if (eal_parse_lcores(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_LCORES "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_LCORES); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option --lcore is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option --lcore is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MSK) ? 
"-c" : "--lcore"); @@ -1898,15 +1898,15 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_IOVA_MODE_NUM: if (eal_parse_iova_mode(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_IOVA_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_IOVA_MODE); return -1; } break; case OPT_BASE_VIRTADDR_NUM: if (eal_parse_base_virtaddr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_BASE_VIRTADDR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_BASE_VIRTADDR); return -1; } break; @@ -1917,8 +1917,8 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_FORCE_MAX_SIMD_BITWIDTH_NUM: if (eal_parse_simd_bitwidth(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_FORCE_MAX_SIMD_BITWIDTH "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_FORCE_MAX_SIMD_BITWIDTH); return -1; } break; @@ -1932,8 +1932,8 @@ eal_parse_common_option(int opt, const char *optarg, return 0; ba_conflict: - RTE_LOG(ERR, EAL, - "Options allow (-a) and block (-b) can't be used at the same time\n"); + RTE_LOG_LINE(ERR, EAL, + "Options allow (-a) and block (-b) can't be used at the same time"); return -1; } @@ -2034,94 +2034,94 @@ eal_check_common_options(struct internal_config *internal_cfg) eal_get_internal_configuration(); if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) { - RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n"); + RTE_LOG_LINE(ERR, EAL, "Main lcore is not enabled for DPDK"); return -1; } if (internal_cfg->process_type == RTE_PROC_INVALID) { - RTE_LOG(ERR, EAL, "Invalid process type specified\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid process type specified"); return -1; } if (internal_cfg->hugefile_prefix != NULL && strlen(internal_cfg->hugefile_prefix) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option"); return -1; } if (internal_cfg->hugepage_dir != NULL && strlen(internal_cfg->hugepage_dir) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option"); return -1; } if (internal_cfg->user_mbuf_pool_ops_name != NULL && strlen(internal_cfg->user_mbuf_pool_ops_name) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option"); return -1; } if (strchr(eal_get_hugefile_prefix(), '%') != NULL) { - RTE_LOG(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " - "option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " + "option"); return -1; } if (mem_parsed && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " - "be specified at the same time\n"); + RTE_LOG_LINE(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " + "be specified at the same time"); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option 
--"OPT_HUGE_UNLINK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->huge_worker_stack_size != 0) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_conf->force_socket_limits && internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_LIMIT - " is only supported in non-legacy memory mode\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_LIMIT + " is only supported in non-legacy memory mode"); } if (internal_cfg->single_file_segments && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " - "not compatible with --"OPT_HUGE_UNLINK"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " + "not compatible with --"OPT_HUGE_UNLINK); return -1; } if (!internal_cfg->hugepage_file.unlink_existing && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " - "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " + "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER); return -1; } if (internal_cfg->legacy_mem && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_IN_MEMORY"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_IN_MEMORY); return -1; } if (internal_cfg->legacy_mem && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->legacy_mem && internal_cfg->memory == 0) { - RTE_LOG(NOTICE, EAL, "Static memory layout is selected, " + RTE_LOG_LINE(NOTICE, EAL, "Static memory layout is selected, " "amount of reserved memory can be adjusted with " - "-m or --"OPT_SOCKET_MEM"\n"); + "-m or --"OPT_SOCKET_MEM); } return 0; @@ -2141,12 +2141,12 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); return -EPERM; } if (bitwidth < RTE_VECT_SIMD_DISABLED || !rte_is_power_of_2(bitwidth)) { - RTE_LOG(ERR, EAL, "Invalid bitwidth value!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid bitwidth value!"); return -EINVAL; } internal_conf->max_simd_bitwidth.bitwidth = bitwidth; diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c index 728815c4a9..abc6117c65 100644 --- a/lib/eal/common/eal_common_proc.c +++ b/lib/eal/common/eal_common_proc.c @@ -181,12 +181,12 @@ static int validate_action_name(const char *name) { if (name == NULL) { - RTE_LOG(ERR, EAL, "Action name cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Action 
name cannot be NULL"); rte_errno = EINVAL; return -1; } if (strnlen(name, RTE_MP_MAX_NAME_LEN) == 0) { - RTE_LOG(ERR, EAL, "Length of action name is zero\n"); + RTE_LOG_LINE(ERR, EAL, "Length of action name is zero"); rte_errno = EINVAL; return -1; } @@ -208,7 +208,7 @@ rte_mp_action_register(const char *name, rte_mp_t action) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } @@ -244,7 +244,7 @@ rte_mp_action_unregister(const char *name) return; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return; } @@ -291,12 +291,12 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) if (errno == EINTR) goto retry; - RTE_LOG(ERR, EAL, "recvmsg failed, %s\n", strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "recvmsg failed, %s", strerror(errno)); return -1; } if (msglen != buflen || (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) { - RTE_LOG(ERR, EAL, "truncated msg\n"); + RTE_LOG_LINE(ERR, EAL, "truncated msg"); return -1; } @@ -311,11 +311,11 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) } /* sanity-check the response */ if (m->msg.num_fds < 0 || m->msg.num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "invalid number of fd's received\n"); + RTE_LOG_LINE(ERR, EAL, "invalid number of fd's received"); return -1; } if (m->msg.len_param < 0 || m->msg.len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "invalid received data length\n"); + RTE_LOG_LINE(ERR, EAL, "invalid received data length"); return -1; } return msglen; @@ -340,7 +340,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "msg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "msg: %s", msg->name); if (m->type == MP_REP || m->type == MP_IGN) { struct pending_request *req = NULL; @@ -359,7 +359,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) req = async_reply_handle_thread_unsafe( pending_req); } else { - RTE_LOG(ERR, EAL, "Drop mp reply: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Drop mp reply: %s", msg->name); cleanup_msg_fds(msg); } pthread_mutex_unlock(&pending_requests.lock); @@ -388,12 +388,12 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) strlcpy(dummy.name, msg->name, sizeof(dummy.name)); mp_send(&dummy, s->sun_path, MP_IGN); } else { - RTE_LOG(ERR, EAL, "Cannot find action: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find action: %s", msg->name); } cleanup_msg_fds(msg); } else if (action(msg, s->sun_path) < 0) { - RTE_LOG(ERR, EAL, "Fail to handle message: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Fail to handle message: %s", msg->name); } } @@ -459,7 +459,7 @@ process_async_request(struct pending_request *sr, const struct timespec *now) tmp = realloc(user_msgs, sizeof(*msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", sr->dst, sr->request->name); /* this entry is going to be removed and its message * dropped, but we don't want to leak memory, so @@ -518,7 +518,7 @@ async_reply_handle_thread_unsafe(void *arg) struct timespec ts_now; if (clock_gettime(CLOCK_MONOTONIC, &ts_now) < 0) { - RTE_LOG(ERR, EAL, "Cannot get 
current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto no_trigger; } @@ -532,10 +532,10 @@ async_reply_handle_thread_unsafe(void *arg) * handling the same message twice. */ if (rte_errno == EINPROGRESS) { - RTE_LOG(DEBUG, EAL, "Request handling is already in progress\n"); + RTE_LOG_LINE(DEBUG, EAL, "Request handling is already in progress"); goto no_trigger; } - RTE_LOG(ERR, EAL, "Failed to cancel alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to cancel alarm"); } if (action == ACTION_TRIGGER) @@ -570,7 +570,7 @@ open_socket_fd(void) mp_fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (mp_fd < 0) { - RTE_LOG(ERR, EAL, "failed to create unix socket\n"); + RTE_LOG_LINE(ERR, EAL, "failed to create unix socket"); return -1; } @@ -582,13 +582,13 @@ open_socket_fd(void) unlink(un.sun_path); /* May still exist since last run */ if (bind(mp_fd, (struct sockaddr *)&un, sizeof(un)) < 0) { - RTE_LOG(ERR, EAL, "failed to bind %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to bind %s: %s", un.sun_path, strerror(errno)); close(mp_fd); return -1; } - RTE_LOG(INFO, EAL, "Multi-process socket %s\n", un.sun_path); + RTE_LOG_LINE(INFO, EAL, "Multi-process socket %s", un.sun_path); return mp_fd; } @@ -614,7 +614,7 @@ rte_mp_channel_init(void) * so no need to initialize IPC. */ if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled"); rte_errno = ENOTSUP; return -1; } @@ -630,13 +630,13 @@ rte_mp_channel_init(void) /* lock the directory */ dir_fd = open(mp_dir_path, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "failed to open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open %s: %s", mp_dir_path, strerror(errno)); return -1; } if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "failed to lock %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to lock %s: %s", mp_dir_path, strerror(errno)); close(dir_fd); return -1; @@ -649,7 +649,7 @@ rte_mp_channel_init(void) if (rte_thread_create_internal_control(&mp_handle_tid, "mp-msg", mp_handle, NULL) < 0) { - RTE_LOG(ERR, EAL, "failed to create mp thread: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to create mp thread: %s", strerror(errno)); close(dir_fd); close(rte_atomic_exchange_explicit(&mp_fd, -1, rte_memory_order_relaxed)); @@ -732,7 +732,7 @@ send_msg(const char *dst_path, struct rte_mp_msg *msg, int type) unlink(dst_path); return 0; } - RTE_LOG(ERR, EAL, "failed to send to (%s) due to %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to send to (%s) due to %s", dst_path, strerror(errno)); return -1; } @@ -760,7 +760,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) /* broadcast to all secondary processes */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; return -1; @@ -769,7 +769,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; closedir(mp_dir); @@ -799,7 +799,7 @@ static int check_input(const struct rte_mp_msg *msg) { if (msg == NULL) { - RTE_LOG(ERR, EAL, "Msg cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Msg cannot be NULL"); rte_errno = EINVAL; return -1; } @@ -808,25 +808,25 @@ check_input(const struct 
rte_mp_msg *msg) return -1; if (msg->len_param < 0) { - RTE_LOG(ERR, EAL, "Message data length is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Message data length is negative"); rte_errno = EINVAL; return -1; } if (msg->num_fds < 0) { - RTE_LOG(ERR, EAL, "Number of fd's is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Number of fd's is negative"); rte_errno = EINVAL; return -1; } if (msg->len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "Message data is too long\n"); + RTE_LOG_LINE(ERR, EAL, "Message data is too long"); rte_errno = E2BIG; return -1; } if (msg->num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "Cannot send more than %d FDs\n", + RTE_LOG_LINE(ERR, EAL, "Cannot send more than %d FDs", RTE_MP_MAX_FD_NUM); rte_errno = E2BIG; return -1; @@ -845,12 +845,12 @@ rte_mp_sendmsg(struct rte_mp_msg *msg) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } - RTE_LOG(DEBUG, EAL, "sendmsg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "sendmsg: %s", msg->name); return mp_send(msg, NULL, MP_MSG); } @@ -865,7 +865,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, pending_req = calloc(1, sizeof(*pending_req)); reply_msg = calloc(1, sizeof(*reply_msg)); if (pending_req == NULL || reply_msg == NULL) { - RTE_LOG(ERR, EAL, "Could not allocate space for sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Could not allocate space for sync request"); rte_errno = ENOMEM; ret = -1; goto fail; @@ -881,7 +881,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; ret = -1; goto fail; @@ -889,7 +889,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); ret = -1; goto fail; @@ -902,7 +902,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, /* if alarm set fails, we simply ignore the reply */ if (rte_eal_alarm_set(ts->tv_sec * 1000000 + ts->tv_nsec / 1000, async_reply_handle, pending_req) < 0) { - RTE_LOG(ERR, EAL, "Fail to set alarm for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to set alarm for request %s:%s", dst, req->name); ret = -1; goto fail; @@ -936,14 +936,14 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; return -1; } ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); return -1; } else if (ret == 0) @@ -961,13 +961,13 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, TAILQ_REMOVE(&pending_requests.requests, &pending_req, next); if (pending_req.reply_received == 0) { - RTE_LOG(ERR, EAL, "Fail to recv reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to recv reply for request %s:%s", dst, req->name); rte_errno = ETIMEDOUT; return -1; } if (pending_req.reply_received == -1) { - RTE_LOG(DEBUG, EAL, "Asked to ignore response\n"); + RTE_LOG_LINE(DEBUG, 
EAL, "Asked to ignore response"); /* not receiving this message is not an error, so decrement * number of sent messages */ @@ -977,7 +977,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, tmp = realloc(reply->msgs, sizeof(msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", dst, req->name); rte_errno = ENOMEM; return -1; @@ -999,7 +999,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); reply->nb_sent = 0; reply->nb_received = 0; @@ -1009,13 +1009,13 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, goto end; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; goto end; } @@ -1035,7 +1035,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, /* for primary process, broadcast request, and collect reply 1 by 1 */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto end; } @@ -1043,7 +1043,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto close_end; @@ -1102,19 +1102,19 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); if (check_input(req) != 0) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; return -1; } @@ -1122,7 +1122,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, dummy = calloc(1, sizeof(*dummy)); param = calloc(1, sizeof(*param)); if (copy == NULL || dummy == NULL || param == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate memory for async reply\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for async reply"); rte_errno = ENOMEM; goto fail; } @@ -1180,7 +1180,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* for primary process, broadcast request */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto unlock_fail; } @@ -1188,7 +1188,7 @@ 
rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto closedir_fail; @@ -1240,7 +1240,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, int rte_mp_reply(struct rte_mp_msg *msg, const char *peer) { - RTE_LOG(DEBUG, EAL, "reply: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "reply: %s", msg->name); const struct internal_config *internal_conf = eal_get_internal_configuration(); @@ -1248,13 +1248,13 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer) return -1; if (peer == NULL) { - RTE_LOG(ERR, EAL, "peer is not specified\n"); + RTE_LOG_LINE(ERR, EAL, "peer is not specified"); rte_errno = EINVAL; return -1; } if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return 0; } diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c index 580fbf24bc..06a6cac4ff 100644 --- a/lib/eal/common/eal_common_tailqs.c +++ b/lib/eal/common/eal_common_tailqs.c @@ -109,8 +109,8 @@ int rte_eal_tailq_register(struct rte_tailq_elem *t) { if (rte_eal_tailq_local_register(t) < 0) { - RTE_LOG(ERR, EAL, - "%s tailq is already registered\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "%s tailq is already registered", t->name); goto error; } @@ -119,8 +119,8 @@ rte_eal_tailq_register(struct rte_tailq_elem *t) if (rte_tailqs_count >= 0) { rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); TAILQ_REMOVE(&rte_tailq_elem_head, t, next); goto error; } @@ -145,8 +145,8 @@ rte_eal_tailqs_init(void) * rte_eal_tailq_register and EAL_REGISTER_TAILQ */ rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); /* TAILQ_REMOVE not needed, error is already fatal */ goto fail; } diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c index c422ea8b53..b0974a7aa5 100644 --- a/lib/eal/common/eal_common_thread.c +++ b/lib/eal/common/eal_common_thread.c @@ -86,7 +86,7 @@ int rte_thread_set_affinity(rte_cpuset_t *cpusetp) { if (rte_thread_set_affinity_by_id(rte_thread_self(), cpusetp) != 0) { - RTE_LOG(ERR, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_thread_set_affinity_by_id failed"); return -1; } @@ -175,7 +175,7 @@ eal_thread_loop(void *arg) __rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])", lcore_id, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -368,12 +368,12 @@ rte_thread_register(void) /* EAL init flushes all lcores, we can't register before. 
*/ if (eal_get_internal_configuration()->init_complete != 1) { - RTE_LOG(DEBUG, EAL, "Called %s before EAL init.\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "Called %s before EAL init.", __func__); rte_errno = EINVAL; return -1; } if (!rte_mp_disable()) { - RTE_LOG(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.\n"); + RTE_LOG_LINE(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported."); rte_errno = EINVAL; return -1; } @@ -387,7 +387,7 @@ rte_thread_register(void) rte_errno = ENOMEM; return -1; } - RTE_LOG(DEBUG, EAL, "Registered non-EAL thread as lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered non-EAL thread as lcore %u.", lcore_id); return 0; } @@ -401,7 +401,7 @@ rte_thread_unregister(void) eal_lcore_non_eal_release(lcore_id); __rte_thread_uninit(); if (lcore_id != LCORE_ID_ANY) - RTE_LOG(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).", lcore_id); } diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c index 5686a5102b..bd2ca85c6c 100644 --- a/lib/eal/common/eal_common_timer.c +++ b/lib/eal/common/eal_common_timer.c @@ -39,8 +39,8 @@ static uint64_t estimate_tsc_freq(void) { #define CYC_PER_10MHZ 1E7 - RTE_LOG(WARNING, EAL, "WARNING: TSC frequency estimated roughly" - " - clock timings may be less accurate.\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: TSC frequency estimated roughly" + " - clock timings may be less accurate."); /* assume that the rte_delay_us_sleep() will sleep for 1 second */ uint64_t start = rte_rdtsc(); rte_delay_us_sleep(US_PER_S); @@ -71,7 +71,7 @@ set_tsc_freq(void) if (!freq) freq = estimate_tsc_freq(); - RTE_LOG(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz\n", freq / 1000); + RTE_LOG_LINE(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz", freq / 1000); eal_tsc_resolution_hz = freq; mcfg->tsc_hz = freq; } diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c index 8561a0e198..f5e724f9cd 100644 --- a/lib/eal/common/eal_common_trace_utils.c +++ b/lib/eal/common/eal_common_trace_utils.c @@ -348,7 +348,7 @@ trace_mkdir(void) return -rte_errno; } - RTE_LOG(INFO, EAL, "Trace dir: %s\n", trace->dir); + RTE_LOG_LINE(INFO, EAL, "Trace dir: %s", trace->dir); already_done = true; return 0; } diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h index ace2ef3ee5..4dbd6ea457 100644 --- a/lib/eal/common/eal_trace.h +++ b/lib/eal/common/eal_trace.h @@ -17,10 +17,10 @@ #include "eal_thread.h" #define trace_err(fmt, args...) \ - RTE_LOG(ERR, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(ERR, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define trace_crit(fmt, args...) 
\ - RTE_LOG(CRIT, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(CRIT, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define TRACE_CTF_MAGIC 0xC1FC1FC1 #define TRACE_MAX_ARGS 32 diff --git a/lib/eal/common/hotplug_mp.c b/lib/eal/common/hotplug_mp.c index 602781966c..cd47c248f5 100644 --- a/lib/eal/common/hotplug_mp.c +++ b/lib/eal/common/hotplug_mp.c @@ -77,7 +77,7 @@ send_response_to_secondary(const struct eal_dev_mp_req *req, ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); return ret; } @@ -101,18 +101,18 @@ __handle_secondary_request(void *param) if (req->t == EAL_DEV_REQ_TYPE_ATTACH) { ret = local_dev_probe(req->devargs, &dev); if (ret != 0 && ret != -EEXIST) { - RTE_LOG(ERR, EAL, "Failed to hotplug add device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on primary"); goto finish; } ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } if (tmp_req.result != 0) { ret = tmp_req.result; - RTE_LOG(ERR, EAL, "Failed to hotplug add device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on secondary"); if (ret != -EEXIST) goto rollback; } @@ -123,27 +123,27 @@ __handle_secondary_request(void *param) ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } bus = rte_bus_find_by_name(da.bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da.bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da.bus->name); ret = -ENOENT; goto finish; } dev = bus->find_device(NULL, cmp_dev_name, da.name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da.name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da.name); ret = -ENOENT; goto finish; } if (tmp_req.result != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on secondary"); ret = tmp_req.result; if (ret != -ENOENT) goto rollback; @@ -151,12 +151,12 @@ __handle_secondary_request(void *param) ret = local_dev_remove(dev); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on primary"); if (ret != -ENOENT) goto rollback; } } else { - RTE_LOG(ERR, EAL, "unsupported secondary to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "unsupported secondary to primary request"); ret = -ENOTSUP; } goto finish; @@ -174,7 +174,7 @@ __handle_secondary_request(void *param) finish: ret = send_response_to_secondary(&tmp_req, ret, bundle->peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); rte_devargs_reset(&da); free(bundle->peer); @@ -191,7 +191,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle = malloc(sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -204,7 +204,7 @@ 
handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle->peer = strdup(peer); if (bundle->peer == NULL) { free(bundle); - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -214,7 +214,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) */ ret = rte_eal_alarm_set(1, __handle_secondary_request, bundle); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to add mp task\n"); + RTE_LOG_LINE(ERR, EAL, "failed to add mp task"); free(bundle->peer); free(bundle); return send_response_to_secondary(req, ret, peer); @@ -257,14 +257,14 @@ static void __handle_primary_request(void *param) bus = rte_bus_find_by_name(da->bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da->bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da->bus->name); ret = -ENOENT; goto quit; } dev = bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da->name); ret = -ENOENT; goto quit; } @@ -296,7 +296,7 @@ static void __handle_primary_request(void *param) memcpy(resp, req, sizeof(*resp)); resp->result = ret; if (rte_mp_reply(&mp_resp, bundle->peer) < 0) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); free(bundle->peer); free(bundle); @@ -320,11 +320,11 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) bundle = calloc(1, sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -336,12 +336,12 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) */ bundle->peer = (void *)strdup(peer); if (bundle->peer == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); free(bundle); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -356,7 +356,7 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) resp->result = ret; ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } } @@ -378,7 +378,7 @@ int eal_dev_hotplug_request_to_primary(struct eal_dev_mp_req *req) ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts); if (ret || mp_reply.nb_received != 1) { - RTE_LOG(ERR, EAL, "Cannot send request to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send request to primary"); if (!ret) return -1; return ret; @@ -408,14 +408,14 @@ int eal_dev_hotplug_request_to_secondary(struct eal_dev_mp_req *req) if (ret != 0) { /* if IPC is not supported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "rte_mp_request_sync failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_mp_request_sync failed"); else ret = 0; return ret; } if (mp_reply.nb_sent != mp_reply.nb_received) { - RTE_LOG(ERR, EAL, "not all secondary reply\n"); + 
RTE_LOG_LINE(ERR, EAL, "not all secondary reply"); free(mp_reply.msgs); return -1; } @@ -448,7 +448,7 @@ int eal_mp_dev_hotplug_init(void) handle_secondary_request); /* primary is allowed to not support IPC */ if (ret != 0 && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } @@ -456,7 +456,7 @@ int eal_mp_dev_hotplug_init(void) ret = rte_mp_action_register(EAL_DEV_MP_ACTION_REQUEST, handle_primary_request); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c index f5d1c8c2e2..6e9d5b8660 100644 --- a/lib/eal/common/malloc_elem.c +++ b/lib/eal/common/malloc_elem.c @@ -148,7 +148,7 @@ malloc_elem_insert(struct malloc_elem *elem) /* first and last elements must be both NULL or both non-NULL */ if ((heap->first == NULL) != (heap->last == NULL)) { - RTE_LOG(ERR, EAL, "Heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is probably corrupt"); return; } @@ -628,7 +628,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(hide_end); } else if (len_after > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } @@ -647,7 +647,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(prev); } else if (len_before > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c index 6b6cf9174c..010c84c36c 100644 --- a/lib/eal/common/malloc_heap.c +++ b/lib/eal/common/malloc_heap.c @@ -117,7 +117,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap_idx = malloc_socket_to_heap_id(msl->socket_id); if (heap_idx < 0) { - RTE_LOG(ERR, EAL, "Memseg list has invalid socket id\n"); + RTE_LOG_LINE(ERR, EAL, "Memseg list has invalid socket id"); return -1; } heap = &mcfg->malloc_heaps[heap_idx]; @@ -135,7 +135,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap->total_size += len; - RTE_LOG(DEBUG, EAL, "Added %zuM to heap on socket %i\n", len >> 20, + RTE_LOG_LINE(DEBUG, EAL, "Added %zuM to heap on socket %i", len >> 20, msl->socket_id); return 0; } @@ -308,7 +308,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* first, check if we're allowed to allocate this memory */ if (eal_memalloc_mem_alloc_validate(socket, heap->total_size + alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n"); + RTE_LOG_LINE(DEBUG, EAL, "User has disallowed allocation"); return NULL; } @@ -324,7 +324,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* check if we wanted contiguous memory but didn't get it */ if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space", __func__); goto fail; } @@ -352,8 +352,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, * which could solve some situations when IOVA VA is not * really needed. 
*/ - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask", __func__); /* @@ -363,8 +363,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, */ if ((rte_eal_iova_mode() == RTE_IOVA_VA) && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter", __func__); goto fail; } @@ -440,7 +440,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz, } heap->total_size += alloc_sz; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB", socket, alloc_sz >> 20ULL); free(ms); @@ -693,7 +693,7 @@ malloc_heap_alloc_on_heap_id(const char *type, size_t size, /* this should have succeeded */ if (ret == NULL) - RTE_LOG(ERR, EAL, "Error allocating from heap\n"); + RTE_LOG_LINE(ERR, EAL, "Error allocating from heap"); } alloc_unlock: rte_spinlock_unlock(&(heap->lock)); @@ -1040,7 +1040,7 @@ malloc_heap_free(struct malloc_elem *elem) /* we didn't exit early, meaning we have unmapped some pages */ unmapped = true; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB", msl->socket_id, aligned_len >> 20ULL); rte_mcfg_mem_write_unlock(); @@ -1199,7 +1199,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], } } if (msl == NULL) { - RTE_LOG(ERR, EAL, "Couldn't find empty memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find empty memseg list"); rte_errno = ENOSPC; return NULL; } @@ -1210,7 +1210,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], /* create the backing fbarray */ if (rte_fbarray_init(&msl->memseg_arr, fbarray_name, n_pages, sizeof(struct rte_memseg)) < 0) { - RTE_LOG(ERR, EAL, "Couldn't create fbarray backing the memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't create fbarray backing the memseg list"); return NULL; } arr = &msl->memseg_arr; @@ -1310,7 +1310,7 @@ malloc_heap_add_external_memory(struct malloc_heap *heap, heap->total_size += msl->len; /* all done! */ - RTE_LOG(DEBUG, EAL, "Added segment for heap %s starting at %p\n", + RTE_LOG_LINE(DEBUG, EAL, "Added segment for heap %s starting at %p", heap->name, msl->base_va); /* notify all subscribers that a new memory area has been added */ @@ -1356,7 +1356,7 @@ malloc_heap_create(struct malloc_heap *heap, const char *heap_name) /* prevent overflow. did you really create 2 billion heaps??? 
*/ if (next_socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; return -1; } @@ -1382,17 +1382,17 @@ int malloc_heap_destroy(struct malloc_heap *heap) { if (heap->alloc_count != 0) { - RTE_LOG(ERR, EAL, "Heap is still in use\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is still in use"); rte_errno = EBUSY; return -1; } if (heap->first != NULL || heap->last != NULL) { - RTE_LOG(ERR, EAL, "Heap still contains memory segments\n"); + RTE_LOG_LINE(ERR, EAL, "Heap still contains memory segments"); rte_errno = EBUSY; return -1; } if (heap->total_size != 0) - RTE_LOG(ERR, EAL, "Total size not zero, heap is likely corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Total size not zero, heap is likely corrupt"); /* Reset all of the heap but the (hold) lock so caller can release it. */ RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0); @@ -1411,7 +1411,7 @@ rte_eal_malloc_heap_init(void) eal_get_internal_configuration(); if (internal_conf->match_allocations) - RTE_LOG(DEBUG, EAL, "Hugepages will be freed exactly as allocated.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Hugepages will be freed exactly as allocated."); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* assign min socket ID to external heaps */ @@ -1431,7 +1431,7 @@ rte_eal_malloc_heap_init(void) } if (register_mp_requests()) { - RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't register malloc multiprocess actions"); return -1; } diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c index 4d62397aba..e0f49bc471 100644 --- a/lib/eal/common/malloc_mp.c +++ b/lib/eal/common/malloc_mp.c @@ -156,7 +156,7 @@ handle_sync(const struct rte_mp_msg *msg, const void *peer) int ret; if (req->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected request from primary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from primary"); return -1; } @@ -189,19 +189,19 @@ handle_free_request(const struct malloc_mp_req *m) /* check if the requested memory actually exists */ msl = rte_mem_virt2memseg_list(start); if (msl == NULL) { - RTE_LOG(ERR, EAL, "Requested to free unknown memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free unknown memory"); return -1; } /* check if end is within the same memory region */ if (rte_mem_virt2memseg_list(end) != msl) { - RTE_LOG(ERR, EAL, "Requested to free memory spanning multiple regions\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free memory spanning multiple regions"); return -1; } /* we're supposed to only free memory that's not external */ if (msl->external) { - RTE_LOG(ERR, EAL, "Requested to free external memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free external memory"); return -1; } @@ -228,13 +228,13 @@ handle_alloc_request(const struct malloc_mp_req *m, /* this is checked by the API, but we need to prevent divide by zero */ if (ar->page_sz == 0 || !rte_is_power_of_2(ar->page_sz)) { - RTE_LOG(ERR, EAL, "Attempting to allocate with invalid page size\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate with invalid page size"); return -1; } /* heap idx is index into the heap array, not socket ID */ if (ar->malloc_heap_idx >= RTE_MAX_HEAPS) { - RTE_LOG(ERR, EAL, "Attempting to allocate from invalid heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from invalid heap"); return -1; } @@ -247,7 +247,7 @@ handle_alloc_request(const struct malloc_mp_req *m, * socket ID's are always lower than RTE_MAX_NUMA_NODES. 
*/ if (heap->socket_id >= RTE_MAX_NUMA_NODES) { - RTE_LOG(ERR, EAL, "Attempting to allocate from external heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from external heap"); return -1; } @@ -258,7 +258,7 @@ handle_alloc_request(const struct malloc_mp_req *m, /* we can't know in advance how many pages we'll need, so we malloc */ ms = malloc(sizeof(*ms) * n_segs); if (ms == NULL) { - RTE_LOG(ERR, EAL, "Couldn't allocate memory for request state\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't allocate memory for request state"); return -1; } memset(ms, 0, sizeof(*ms) * n_segs); @@ -307,13 +307,13 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) /* make sure it's not a dupe */ entry = find_request_by_id(m->id); if (entry != NULL) { - RTE_LOG(ERR, EAL, "Duplicate request id\n"); + RTE_LOG_LINE(ERR, EAL, "Duplicate request id"); goto fail; } entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate memory for request"); goto fail; } @@ -325,7 +325,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) } else if (m->t == REQ_TYPE_FREE) { ret = handle_free_request(m); } else { - RTE_LOG(ERR, EAL, "Unexpected request from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from secondary"); goto fail; } @@ -345,7 +345,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) resp->id = m->id; if (rte_mp_sendmsg(&resp_msg)) { - RTE_LOG(ERR, EAL, "Couldn't send response\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send response"); goto fail; } /* we did not modify the request */ @@ -376,7 +376,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) handle_sync_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't send sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send sync request"); if (m->t == REQ_TYPE_ALLOC) free(entry->alloc_state.ms); goto fail; @@ -414,7 +414,7 @@ handle_sync_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } @@ -428,12 +428,12 @@ handle_sync_response(const struct rte_mp_msg *request, (struct malloc_mp_req *)reply->msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response to sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response to sync request"); result = REQ_RESULT_FAIL; break; } if (resp->id != entry->user_req.id) { - RTE_LOG(ERR, EAL, "Response to wrong sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Response to wrong sync request"); result = REQ_RESULT_FAIL; break; } @@ -458,7 +458,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry); @@ -482,7 +482,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry->alloc_state.ms); @@ -524,7 +524,7 @@ 
handle_sync_response(const struct rte_mp_msg *request, handle_rollback_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not send rollback request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send rollback request to secondary process"); /* we couldn't send rollback request, but that's OK - * secondary will time out, and memory has been removed @@ -536,7 +536,7 @@ handle_sync_response(const struct rte_mp_msg *request, goto fail; } } else { - RTE_LOG(ERR, EAL, " to sync request of unknown type\n"); + RTE_LOG_LINE(ERR, EAL, " to sync request of unknown type"); goto fail; } @@ -564,12 +564,12 @@ handle_rollback_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } if (entry->user_req.t != REQ_TYPE_ALLOC) { - RTE_LOG(ERR, EAL, "Unexpected active request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected active request"); goto fail; } @@ -582,7 +582,7 @@ handle_rollback_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); /* clean up */ TAILQ_REMOVE(&mp_request_list.list, entry, next); @@ -657,14 +657,14 @@ request_sync(void) if (ret != 0) { /* if IPC is unsupported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "Could not send sync request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send sync request to secondary process"); else ret = 0; goto out; } if (reply.nb_received != reply.nb_sent) { - RTE_LOG(ERR, EAL, "Not all secondaries have responded\n"); + RTE_LOG_LINE(ERR, EAL, "Not all secondaries have responded"); goto out; } @@ -672,15 +672,15 @@ request_sync(void) struct malloc_mp_req *resp = (struct malloc_mp_req *)reply.msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response from secondary"); goto out; } if (resp->id != req->id) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto out; } if (resp->result != REQ_RESULT_SUCCESS) { - RTE_LOG(ERR, EAL, "Secondary process failed to synchronize\n"); + RTE_LOG_LINE(ERR, EAL, "Secondary process failed to synchronize"); goto out; } } @@ -711,14 +711,14 @@ request_to_primary(struct malloc_mp_req *user_req) entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for request"); goto fail; } memset(entry, 0, sizeof(*entry)); if (gettimeofday(&now, NULL) < 0) { - RTE_LOG(ERR, EAL, "Cannot get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto fail; } @@ -740,7 +740,7 @@ request_to_primary(struct malloc_mp_req *user_req) memcpy(msg_req, user_req, sizeof(*msg_req)); if (rte_mp_sendmsg(&msg)) { - RTE_LOG(ERR, EAL, "Cannot send message to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send message to primary"); goto fail; } @@ -759,7 +759,7 @@ request_to_primary(struct malloc_mp_req *user_req) } while (ret != 0 && ret != ETIMEDOUT); if (entry->state != REQ_STATE_COMPLETE) { - RTE_LOG(ERR, EAL, "Request timed out\n"); + RTE_LOG_LINE(ERR, EAL, "Request timed out"); ret = -1; } else { ret = 0; @@ -783,24 +783,24 @@ 
register_mp_requests(void) /* it's OK for primary to not support IPC */ if (rte_mp_action_register(MP_ACTION_REQUEST, handle_request) && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_REQUEST); return -1; } } else { if (rte_mp_action_register(MP_ACTION_SYNC, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_ROLLBACK, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_RESPONSE, handle_response)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_RESPONSE); return -1; } diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c index e0494b2010..699022ae1c 100644 --- a/lib/eal/common/rte_keepalive.c +++ b/lib/eal/common/rte_keepalive.c @@ -53,7 +53,7 @@ struct rte_keepalive { static void print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core) { - RTE_LOG(INFO, EAL, "%sLast seen %" PRId64 "ms ago.\n", + RTE_LOG_LINE(INFO, EAL, "%sLast seen %" PRId64 "ms ago.", msg, ((rte_rdtsc() - keepcfg->last_alive[idx_core])*1000) / rte_get_tsc_hz() diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c index 9db0c399ae..9b3038805a 100644 --- a/lib/eal/common/rte_malloc.c +++ b/lib/eal/common/rte_malloc.c @@ -35,7 +35,7 @@ mem_free(void *addr, const bool trace_ena) if (addr == NULL) return; if (malloc_heap_free(malloc_elem_from_data(addr)) < 0) - RTE_LOG(ERR, EAL, "Error: Invalid memory\n"); + RTE_LOG_LINE(ERR, EAL, "Error: Invalid memory"); } void @@ -171,7 +171,7 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket) struct malloc_elem *elem = malloc_elem_from_data(ptr); if (elem == NULL) { - RTE_LOG(ERR, EAL, "Error: memory corruption detected\n"); + RTE_LOG_LINE(ERR, EAL, "Error: memory corruption detected"); return NULL; } @@ -598,7 +598,7 @@ rte_malloc_heap_create(const char *heap_name) /* existing heap */ if (strncmp(heap_name, tmp->name, RTE_HEAP_NAME_MAX_LEN) == 0) { - RTE_LOG(ERR, EAL, "Heap %s already exists\n", + RTE_LOG_LINE(ERR, EAL, "Heap %s already exists", heap_name); rte_errno = EEXIST; ret = -1; @@ -611,7 +611,7 @@ rte_malloc_heap_create(const char *heap_name) } } if (heap == NULL) { - RTE_LOG(ERR, EAL, "Cannot create new heap: no space\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new heap: no space"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -643,7 +643,7 @@ rte_malloc_heap_destroy(const char *heap_name) /* start from non-socket heaps */ heap = find_named_heap(heap_name); if (heap == NULL) { - RTE_LOG(ERR, EAL, "Heap %s not found\n", heap_name); + RTE_LOG_LINE(ERR, EAL, "Heap %s not found", heap_name); rte_errno = ENOENT; ret = -1; goto unlock; diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c index e183d2e631..3ed4186add 100644 --- a/lib/eal/common/rte_service.c +++ b/lib/eal/common/rte_service.c @@ -87,8 +87,8 @@ rte_service_init(void) RTE_BUILD_BUG_ON(RTE_SERVICE_NUM_MAX > 64); if (rte_service_library_initialized) { - RTE_LOG(NOTICE, EAL, - "service library init() called, init flag %d\n", + RTE_LOG_LINE(NOTICE, EAL, + "service library init() called, init flag %d", rte_service_library_initialized); return -EALREADY; } @@ -97,14 
+97,14 @@ rte_service_init(void) sizeof(struct rte_service_spec_impl), RTE_CACHE_LINE_SIZE); if (!rte_services) { - RTE_LOG(ERR, EAL, "error allocating rte services array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating rte services array"); goto fail_mem; } lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE, sizeof(struct core_state), RTE_CACHE_LINE_SIZE); if (!lcore_states) { - RTE_LOG(ERR, EAL, "error allocating core states array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating core states array"); goto fail_mem; } diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c index 568e06e9ed..2c5d196af0 100644 --- a/lib/eal/freebsd/eal.c +++ b/lib/eal/freebsd/eal.c @@ -117,7 +117,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -127,7 +127,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -136,8 +136,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. Is another primary " + "process running?", pathname); return -1; } @@ -145,7 +145,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -156,7 +156,7 @@ rte_eal_config_create(void) cfg_len_aligned, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0); if (mapped_mem_cfg_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; @@ -190,7 +190,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -202,7 +202,7 @@ rte_eal_config_attach(void) if (rte_mem_cfg_addr == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -242,14 +242,14 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! 
error %i (%s)", errno, strerror(errno)); return -1; } @@ -280,7 +280,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY"); return ptype; @@ -307,20 +307,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -454,7 +454,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -469,16 +469,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on FreeBSD", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on FreeBSD\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on FreeBSD", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on FreeBSD", opt); } eal_usage(prgname); ret = -1; @@ -489,11 +489,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -545,7 +545,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } @@ -572,7 +572,7 @@ rte_eal_iopl_init(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Launch threads, called at application init(). 
*/ @@ -629,7 +629,7 @@ rte_eal_init(int argc, char **argv) /* FreeBSD always uses legacy memory model */ internal_conf->legacy_mem = true; if (internal_conf->in_memory) { - RTE_LOG(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'\n", OPT_IN_MEMORY); + RTE_LOG_LINE(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'", OPT_IN_MEMORY); internal_conf->in_memory = false; } @@ -695,14 +695,14 @@ rte_eal_init(int argc, char **argv) has_phys_addr = internal_conf->no_hugetlbfs == 0; iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -725,7 +725,7 @@ rte_eal_init(int argc, char **argv) } rte_eal_get_configuration()->iova_mode = iova_mode; - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -751,11 +751,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -818,7 +818,7 @@ rte_eal_init(int argc, char **argv) ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? 
"" : "..."); @@ -917,7 +917,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c index e5b0909a45..2493adf8ae 100644 --- a/lib/eal/freebsd/eal_alarm.c +++ b/lib/eal/freebsd/eal_alarm.c @@ -59,7 +59,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c index c3dfe9108f..8d35148ba3 100644 --- a/lib/eal/freebsd/eal_dev.c +++ b/lib/eal/freebsd/eal_dev.c @@ -8,27 +8,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c index e58e618469..3c97daa444 100644 --- a/lib/eal/freebsd/eal_hugepage_info.c +++ b/lib/eal/freebsd/eal_hugepage_info.c @@ -72,7 +72,7 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers"); return -1; } @@ -81,28 +81,28 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size"); return -1; } fd = open(CONTIGMEM_DEV, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "could not open "CONTIGMEM_DEV"\n"); + RTE_LOG_LINE(ERR, EAL, "could not open "CONTIGMEM_DEV); return -1; } if (flock(fd, LOCK_EX | LOCK_NB) < 0) { - RTE_LOG(ERR, EAL, "could not lock memory. Is another DPDK process running?\n"); + RTE_LOG_LINE(ERR, EAL, "could not lock memory. 
Is another DPDK process running?"); return -1; } if (buffer_size >= 1<<30) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB", num_buffers, (int)(buffer_size>>30)); else if (buffer_size >= 1<<20) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB", num_buffers, (int)(buffer_size>>20)); else - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB", num_buffers, (int)(buffer_size>>10)); strlcpy(hpi->hugedir, CONTIGMEM_DEV, sizeof(hpi->hugedir)); @@ -117,7 +117,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL ) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); return -1; } @@ -132,7 +132,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } @@ -154,14 +154,14 @@ eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 2b31dfb099..ffba823808 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -90,12 +90,12 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, - "Registering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Registering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active: %d\n", kq); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active: %d", kq); return -ENODEV; } @@ -120,7 +120,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } @@ -132,13 +132,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(src); src = NULL; @@ -167,7 +167,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, ke.flags = EV_ADD; /* mark for addition to the queue 
*/ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert interrupt handle to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert interrupt handle to kevent"); ret = -ENODEV; goto fail; } @@ -181,10 +181,10 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, * user. so, don't output it unless debug log level set. */ if (errno == ENODEV) - RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n", + RTE_LOG_LINE(DEBUG, EAL, "Interrupt handle %d not supported", rte_intr_fd_get(src->intr_handle)); else - RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error adding fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); ret = -errno; @@ -222,13 +222,13 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -277,12 +277,12 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -312,7 +312,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, ke.flags = EV_DELETE; /* mark for deletion from the queue */ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); ret = -ENODEV; goto out; } @@ -321,7 +321,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected condition @@ -396,7 +396,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -437,7 +437,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -513,13 +513,13 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", event_fd, strerror(errno)); } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", event_fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", event_fd); else call = true; } @@ -556,7 +556,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) ke.flags = EV_DELETE; if (intr_source_to_kevent(src->intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); rte_spinlock_unlock(&intr_lock); return; } @@ -565,7 +565,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected @@ -606,8 +606,8 @@ eal_intr_thread_main(void *arg __rte_unused) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "kevent returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "kevent returns with fail"); break; } /* kevent timeout, will never happen here */ @@ -632,7 +632,7 @@ rte_eal_intr_init(void) kq = kqueue(); if (kq < 0) { - RTE_LOG(ERR, EAL, "Cannot create kqueue instance\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create kqueue instance"); return -1; } @@ -641,8 +641,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c index d9ef4bc9c5..cfd375076a 100644 --- a/lib/eal/freebsd/eal_lcore.c +++ b/lib/eal/freebsd/eal_lcore.c @@ -30,7 +30,7 @@ eal_get_ncpus(void) if (ncpu < 0) { sysctl(mib, 2, &ncpu, &len, NULL, 0); - RTE_LOG(INFO, EAL, "Sysctl reports %d cpus\n", ncpu); + RTE_LOG_LINE(INFO, EAL, "Sysctl reports %d cpus", ncpu); } return ncpu; } diff --git a/lib/eal/freebsd/eal_memalloc.c b/lib/eal/freebsd/eal_memalloc.c index 00ab02cb63..f96ed2ce21 100644 --- a/lib/eal/freebsd/eal_memalloc.c +++ b/lib/eal/freebsd/eal_memalloc.c @@ -15,21 +15,21 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms __rte_unused, int __rte_unused n_segs, size_t __rte_unused page_sz, int __rte_unused socket, bool __rte_unused exact) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } struct rte_memseg * eal_memalloc_alloc_seg(size_t __rte_unused page_sz, int __rte_unused socket) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return NULL; } int eal_memalloc_free_seg(struct rte_memseg *ms __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } @@ -37,14 +37,14 @@ int eal_memalloc_free_seg_bulk(struct rte_memseg **ms __rte_unused, int n_segs __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } int eal_memalloc_sync_with_primary(void) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c index 5c6165c580..195f570da0 100644 --- a/lib/eal/freebsd/eal_memory.c +++ b/lib/eal/freebsd/eal_memory.c @@ -84,7 +84,7 @@ rte_eal_hugepage_init(void) addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); return -1; } @@ -132,8 +132,8 @@ rte_eal_hugepage_init(void) error = sysctlbyname(physaddr_str, &physaddr, &sysctl_size, NULL, 0); if (error < 0) { - RTE_LOG(ERR, EAL, "Failed to get physical addr for buffer %u " - "from %s\n", 
j, hpi->hugedir); + RTE_LOG_LINE(ERR, EAL, "Failed to get physical addr for buffer %u " + "from %s", j, hpi->hugedir); return -1; } @@ -172,8 +172,8 @@ rte_eal_hugepage_init(void) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } arr = &msl->memseg_arr; @@ -190,7 +190,7 @@ rte_eal_hugepage_init(void) hpi->lock_descriptor, j * EAL_PAGE_SIZE); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", j, hpi->hugedir); return -1; } @@ -205,8 +205,8 @@ rte_eal_hugepage_init(void) rte_fbarray_set_used(arr, ms_idx); - RTE_LOG(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" - PRIx64", len %zu\n", + RTE_LOG_LINE(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" + PRIx64", len %zu", seg_idx++, addr, physaddr, page_sz); total_mem += seg->len; @@ -215,9 +215,9 @@ rte_eal_hugepage_init(void) break; } if (total_mem < internal_conf->memory) { - RTE_LOG(ERR, EAL, "Couldn't reserve requested memory, " + RTE_LOG_LINE(ERR, EAL, "Couldn't reserve requested memory, " "requested: %" PRIu64 "M " - "available: %" PRIu64 "M\n", + "available: %" PRIu64 "M", internal_conf->memory >> 20, total_mem >> 20); return -1; } @@ -268,7 +268,7 @@ rte_eal_hugepage_attach(void) /* Obtain a file descriptor for contiguous memory */ fd_hugepage = open(cur_hpi->hugedir, O_RDWR); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", cur_hpi->hugedir); goto error; } @@ -277,7 +277,7 @@ rte_eal_hugepage_attach(void) /* Map the contiguous memory into each memory segment */ if (rte_memseg_walk(attach_segment, &wa) < 0) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", wa.seg_idx, cur_hpi->hugedir); goto error; } @@ -402,8 +402,8 @@ memseg_primary_init(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -424,7 +424,7 @@ memseg_primary_init(void) type_msl_idx++; if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); return -1; } } @@ -449,13 +449,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c index 6f97a3c2c1..0f7284768a 100644 --- a/lib/eal/freebsd/eal_thread.c +++ b/lib/eal/freebsd/eal_thread.c @@ -38,7 +38,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); 
if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); pthread_set_name_np((pthread_t)thread_id.opaque_id, truncated); } diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c index beff755a47..61488ff641 100644 --- a/lib/eal/freebsd/eal_timer.c +++ b/lib/eal/freebsd/eal_timer.c @@ -36,20 +36,20 @@ get_tsc_freq(void) tmp = 0; if (sysctlbyname("kern.timecounter.smp_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not safe to use in SMP mode\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not safe to use in SMP mode"); tmp = 0; if (sysctlbyname("kern.timecounter.invariant_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not invariant\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not invariant"); sz = sizeof(tsc_hz); if (sysctlbyname("machdep.tsc_freq", &tsc_hz, &sz, NULL, 0)) { - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c index 57da058cec..8aaff34d54 100644 --- a/lib/eal/linux/eal.c +++ b/lib/eal/linux/eal.c @@ -94,7 +94,7 @@ eal_clean_runtime_dir(void) /* open directory */ dir = opendir(runtime_dir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open runtime directory %s", runtime_dir); goto error; } @@ -102,14 +102,14 @@ eal_clean_runtime_dir(void) /* lock the directory before doing anything, to avoid races */ if (flock(dir_fd, LOCK_EX) < 0) { - RTE_LOG(ERR, EAL, "Unable to lock runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock runtime directory %s", runtime_dir); goto error; } dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read runtime directory %s", runtime_dir); goto error; } @@ -159,7 +159,7 @@ eal_clean_runtime_dir(void) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while clearing runtime dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while clearing runtime dir: %s", strerror(errno)); return -1; @@ -200,7 +200,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -210,7 +210,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -219,8 +219,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. 
Is another primary " + "process running?", pathname); return -1; } @@ -228,7 +228,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -242,7 +242,7 @@ rte_eal_config_create(void) munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); return -1; } @@ -275,7 +275,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -287,7 +287,7 @@ rte_eal_config_attach(void) if (mem_config == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -328,13 +328,13 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", rte_mem_cfg_addr, mem_config); + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -365,7 +365,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -392,20 +392,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -474,7 +474,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg) len = strnlen(strval, SOCKET_MEM_STRLEN); if (len == SOCKET_MEM_STRLEN) { - RTE_LOG(ERR, EAL, "--socket-mem is too long\n"); + RTE_LOG_LINE(ERR, EAL, "--socket-mem is too long"); return -1; } @@ -595,13 +595,13 @@ eal_parse_huge_worker_stack(const char *arg) int ret; if (pthread_attr_init(&attr) != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size); pthread_attr_destroy(&attr); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } } else { @@ -617,7 +617,7 @@ eal_parse_huge_worker_stack(const char *arg) cfg->huge_worker_stack_size = stack_size * 1024; } - RTE_LOG(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack\n", + RTE_LOG_LINE(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack", cfg->huge_worker_stack_size / 1024); return 0; } @@ -673,7 +673,7 @@ eal_parse_args(int argc, char **argv) { char *hdir = strdup(optarg); if (hdir == NULL) - RTE_LOG(ERR, EAL, "Could not store hugepage directory\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store hugepage directory"); else { /* free old hugepage dir */ free(internal_conf->hugepage_dir); @@ -685,7 +685,7 @@ eal_parse_args(int argc, char **argv) { char *prefix = strdup(optarg); if (prefix == NULL) - RTE_LOG(ERR, EAL, "Could not store file prefix\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store file prefix"); else { /* free old prefix */ free(internal_conf->hugefile_prefix); @@ -696,8 +696,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_MEM_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_mem) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_MEM "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_MEM); eal_usage(prgname); ret = -1; goto out; @@ -708,8 +708,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_LIMIT_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_limit) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_LIMIT "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_LIMIT); eal_usage(prgname); ret = -1; goto out; @@ -719,8 +719,8 @@ eal_parse_args(int argc, char **argv) case OPT_VFIO_INTR_NUM: if (eal_parse_vfio_intr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_INTR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_INTR); eal_usage(prgname); ret = -1; goto out; @@ -729,8 +729,8 @@ eal_parse_args(int argc, char **argv) case 
OPT_VFIO_VF_TOKEN_NUM: if (eal_parse_vfio_vf_token(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_VF_TOKEN "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_VF_TOKEN); eal_usage(prgname); ret = -1; goto out; @@ -745,7 +745,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -761,8 +761,8 @@ eal_parse_args(int argc, char **argv) case OPT_HUGE_WORKER_STACK_NUM: if (eal_parse_huge_worker_stack(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_HUGE_WORKER_STACK"\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_HUGE_WORKER_STACK); eal_usage(prgname); ret = -1; goto out; @@ -771,16 +771,16 @@ eal_parse_args(int argc, char **argv) default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Linux", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Linux\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Linux", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Linux", opt); } eal_usage(prgname); ret = -1; @@ -791,11 +791,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -843,7 +843,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } static int @@ -880,7 +880,7 @@ static int rte_eal_vfio_setup(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* @@ -1073,27 +1073,27 @@ rte_eal_init(int argc, char **argv) enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Buses did not request a specific IOVA mode."); if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else if (!phys_addrs) { /* if we have no access to physical addresses, * pick IOVA as VA mode. 
*/ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode."); } else if (is_iommu_enabled()) { /* we have an IOMMU, pick IOVA as VA mode */ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode."); } else { /* physical addresses available, and no IOMMU * found, so pick IOVA as PA. */ iova_mode = RTE_IOVA_PA; - RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode."); } } rte_eal_get_configuration()->iova_mode = iova_mode; @@ -1114,7 +1114,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -1138,11 +1138,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -1229,7 +1229,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? "" : "..."); @@ -1350,7 +1350,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } @@ -1420,7 +1420,7 @@ rte_eal_check_module(const char *module_name) /* Check if there is sysfs mounted */ if (stat("/sys/module", &st) != 0) { - RTE_LOG(DEBUG, EAL, "sysfs is not mounted! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "sysfs is not mounted! error %i (%s)", errno, strerror(errno)); return -1; } @@ -1428,12 +1428,12 @@ rte_eal_check_module(const char *module_name) /* A module might be built-in, therefore try sysfs */ n = snprintf(sysfs_mod_name, PATH_MAX, "/sys/module/%s", module_name); if (n < 0 || n > PATH_MAX) { - RTE_LOG(DEBUG, EAL, "Could not format module path\n"); + RTE_LOG_LINE(DEBUG, EAL, "Could not format module path"); return -1; } if (stat(sysfs_mod_name, &st) != 0) { - RTE_LOG(DEBUG, EAL, "Module %s not found! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "Module %s not found! 
error %i (%s)", sysfs_mod_name, errno, strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c index 766ba2c251..3c0464ad10 100644 --- a/lib/eal/linux/eal_alarm.c +++ b/lib/eal/linux/eal_alarm.c @@ -65,7 +65,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c index ac76f6174d..16e817121d 100644 --- a/lib/eal/linux/eal_dev.c +++ b/lib/eal/linux/eal_dev.c @@ -64,7 +64,7 @@ static void sigbus_handler(int signum, siginfo_t *info, { int ret; - RTE_LOG(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p\n", + RTE_LOG_LINE(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p", info->si_addr); rte_spinlock_lock(&failure_handle_lock); @@ -88,7 +88,7 @@ static void sigbus_handler(int signum, siginfo_t *info, } } - RTE_LOG(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!\n"); + RTE_LOG_LINE(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!"); } static int cmp_dev_name(const struct rte_device *dev, @@ -108,7 +108,7 @@ dev_uev_socket_fd_create(void) fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK, NETLINK_KOBJECT_UEVENT); if (fd < 0) { - RTE_LOG(ERR, EAL, "create uevent fd failed.\n"); + RTE_LOG_LINE(ERR, EAL, "create uevent fd failed."); return -1; } @@ -119,7 +119,7 @@ dev_uev_socket_fd_create(void) ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr)); if (ret < 0) { - RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to bind uevent socket."); goto err; } @@ -245,18 +245,18 @@ dev_uev_handler(__rte_unused void *param) return; else if (ret <= 0) { /* connection is closed or broken, can not up again. 
*/ - RTE_LOG(ERR, EAL, "uevent socket connection is broken.\n"); + RTE_LOG_LINE(ERR, EAL, "uevent socket connection is broken."); rte_eal_alarm_set(1, dev_delayed_unregister, NULL); return; } ret = dev_uev_parse(buf, &uevent, EAL_UEV_MSG_LEN); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Ignoring uevent '%s'\n", buf); + RTE_LOG_LINE(DEBUG, EAL, "Ignoring uevent '%s'", buf); return; } - RTE_LOG(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)\n", + RTE_LOG_LINE(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)", uevent.devname, uevent.type, uevent.subsystem); switch (uevent.subsystem) { @@ -273,7 +273,7 @@ dev_uev_handler(__rte_unused void *param) rte_spinlock_lock(&failure_handle_lock); bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); goto failure_handle_err; } @@ -281,15 +281,15 @@ dev_uev_handler(__rte_unused void *param) dev = bus->find_device(NULL, cmp_dev_name, uevent.devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s) on " - "bus (%s)\n", uevent.devname, busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s) on " + "bus (%s)", uevent.devname, busname); goto failure_handle_err; } ret = bus->hot_unplug_handler(dev); if (ret) { - RTE_LOG(ERR, EAL, "Can not handle hot-unplug " - "for device (%s)\n", dev->name); + RTE_LOG_LINE(ERR, EAL, "Can not handle hot-unplug " + "for device (%s)", dev->name); } rte_spinlock_unlock(&failure_handle_lock); } @@ -318,7 +318,7 @@ rte_dev_event_monitor_start(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto exit; } @@ -332,7 +332,7 @@ rte_dev_event_monitor_start(void) ret = dev_uev_socket_fd_create(); if (ret) { - RTE_LOG(ERR, EAL, "error create device event fd.\n"); + RTE_LOG_LINE(ERR, EAL, "error create device event fd."); goto exit; } @@ -362,7 +362,7 @@ rte_dev_event_monitor_stop(void) rte_rwlock_write_lock(&monitor_lock); if (!monitor_refcount) { - RTE_LOG(ERR, EAL, "device event monitor already stopped\n"); + RTE_LOG_LINE(ERR, EAL, "device event monitor already stopped"); goto exit; } @@ -374,7 +374,7 @@ rte_dev_event_monitor_stop(void) ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler, (void *)-1); if (ret < 0) { - RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n"); + RTE_LOG_LINE(ERR, EAL, "fail to unregister uevent callback."); goto exit; } @@ -429,8 +429,8 @@ rte_dev_hotplug_handle_enable(void) ret = dev_sigbus_handler_register(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to register sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to register sigbus handler for devices."); hotplug_handle = true; @@ -444,8 +444,8 @@ rte_dev_hotplug_handle_disable(void) ret = dev_sigbus_handler_unregister(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to unregister sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to unregister sigbus handler for devices."); hotplug_handle = false; diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 36a495fb1f..971c57989d 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -110,7 +110,7 @@ get_num_hugepages(const char *subdir, size_t sz, unsigned int reusable_pages) over_pages = 0; if (num_pages == 0 && over_pages == 0 && reusable_pages) - RTE_LOG(WARNING, EAL, "No available %zu kB hugepages 
reported\n", + RTE_LOG_LINE(WARNING, EAL, "No available %zu kB hugepages reported", sz >> 10); num_pages += over_pages; @@ -155,7 +155,7 @@ get_num_hugepages_on_node(const char *subdir, unsigned int socket, size_t sz) return 0; if (num_pages == 0) - RTE_LOG(WARNING, EAL, "No free %zu kB hugepages reported on node %u\n", + RTE_LOG_LINE(WARNING, EAL, "No free %zu kB hugepages reported on node %u", sz >> 10, socket); /* @@ -239,7 +239,7 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len) if (rte_strsplit(buf, sizeof(buf), splitstr, _FIELDNAME_MAX, split_tok) != _FIELDNAME_MAX) { - RTE_LOG(ERR, EAL, "Error parsing %s\n", proc_mounts); + RTE_LOG_LINE(ERR, EAL, "Error parsing %s", proc_mounts); break; /* return NULL */ } @@ -325,7 +325,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dir = opendir(hugedir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open hugepage directory %s", hugedir); goto error; } @@ -333,7 +333,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read hugepage directory %s", hugedir); goto error; } @@ -377,7 +377,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while walking hugepage dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while walking hugepage dir: %s", strerror(errno)); return -1; @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; @@ -492,8 +492,8 @@ hugepage_info_init(void) dir = opendir(sys_dir_path); if (dir == NULL) { - RTE_LOG(ERR, EAL, - "Cannot open directory %s to read system hugepage info\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open directory %s to read system hugepage info", sys_dir_path); return -1; } @@ -520,10 +520,10 @@ hugepage_info_init(void) num_pages = get_num_hugepages(dirent->d_name, hpi->hugepage_sz, 0); if (num_pages > 0) - RTE_LOG(NOTICE, EAL, + RTE_LOG_LINE(NOTICE, EAL, "%" PRIu32 " hugepages of size " "%" PRIu64 " reserved, but no mounted " - "hugetlbfs found for that size\n", + "hugetlbfs found for that size", num_pages, hpi->hugepage_sz); /* if we have kernel support for reserving hugepages * through mmap, and we're in in-memory mode, treat this @@ -533,9 +533,9 @@ hugepage_info_init(void) */ #ifdef MAP_HUGE_SHIFT if (internal_conf->in_memory) { - RTE_LOG(DEBUG, EAL, "In-memory mode enabled, " + RTE_LOG_LINE(DEBUG, EAL, "In-memory mode enabled, " "hugepages of size %" PRIu64 " bytes " - "will be allocated anonymously\n", + "will be allocated anonymously", hpi->hugepage_sz); calc_num_pages(hpi, dirent, 0); num_sizes++; @@ -549,8 +549,8 @@ hugepage_info_init(void) /* if blocking lock failed */ if (flock(hpi->lock_descriptor, LOCK_EX) == -1) { - RTE_LOG(CRIT, EAL, - "Failed to lock hugepage directory!\n"); + RTE_LOG_LINE(CRIT, EAL, + "Failed to lock hugepage directory!"); break; } @@ -626,7 +626,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to 
create shared memory!"); return -1; } @@ -641,7 +641,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; @@ -657,14 +657,14 @@ int eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index eabac24992..9a7169c4e4 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -123,7 +123,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -140,7 +140,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -168,7 +168,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error masking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -184,7 +184,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -208,7 +208,7 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle) vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -238,7 +238,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -264,7 +264,7 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) { vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -303,7 +303,7 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, 
VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -331,7 +331,7 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -363,7 +363,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -392,7 +392,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -409,16 +409,16 @@ uio_intx_intr_disable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* disable interrupts */ command_high |= 0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error disabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error disabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -435,16 +435,16 @@ uio_intx_intr_enable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* enable interrupts */ command_high &= ~0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error enabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error enabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -459,7 +459,7 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -473,7 +473,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -492,14 +492,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, "Registering with invalid 
input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Registering with invalid input parameter"); return -EINVAL; } /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); return -ENOMEM; } callback->cb_fn = cb; @@ -526,14 +526,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; free(callback); callback = NULL; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(callback); callback = NULL; @@ -575,7 +575,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -625,7 +625,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -752,7 +752,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -817,7 +817,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) return -1; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -884,7 +884,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -972,8 +972,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", events[n].data.fd, strerror(errno)); /* @@ -995,8 +995,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) free(src); return -1; } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", events[n].data.fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", events[n].data.fd); else call = true; } @@ -1080,8 +1080,8 @@ eal_intr_handle_interrupts(int pfd, unsigned totalfds) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "epoll_wait returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "epoll_wait returns with fail"); return; } /* epoll_wait timeout, will never happens here */ @@ -1192,8 +1192,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for 
interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; @@ -1226,7 +1226,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) return; default: bytes_read = 1; - RTE_LOG(INFO, EAL, "unexpected intr type\n"); + RTE_LOG_LINE(INFO, EAL, "unexpected intr type"); break; } @@ -1242,11 +1242,11 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) if (errno == EINTR || errno == EWOULDBLOCK || errno == EAGAIN) continue; - RTE_LOG(ERR, EAL, - "Error reading from fd %d: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading from fd %d: %s", fd, strerror(errno)); } else if (nbytes == 0) - RTE_LOG(ERR, EAL, "Read nothing from fd %d\n", fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from fd %d", fd); return; } while (1); } @@ -1296,8 +1296,8 @@ eal_init_tls_epfd(void) int pfd = epoll_create(255); if (pfd < 0) { - RTE_LOG(ERR, EAL, - "Cannot create epoll instance\n"); + RTE_LOG_LINE(ERR, EAL, + "Cannot create epoll instance"); return -1; } return pfd; @@ -1320,7 +1320,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, int rc; if (!events) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1342,7 +1342,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, continue; } /* epoll_wait fail */ - RTE_LOG(ERR, EAL, "epoll_wait returns with fail %s\n", + RTE_LOG_LINE(ERR, EAL, "epoll_wait returns with fail %s", strerror(errno)); rc = -1; break; @@ -1393,7 +1393,7 @@ rte_epoll_ctl(int epfd, int op, int fd, struct epoll_event ev; if (!event) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1411,7 +1411,7 @@ rte_epoll_ctl(int epfd, int op, int fd, ev.events = event->epdata.event; if (epoll_ctl(epfd, op, fd, &ev) < 0) { - RTE_LOG(ERR, EAL, "Error op %d fd %d epoll_ctl, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error op %d fd %d epoll_ctl, %s", op, fd, strerror(errno)); if (op == EPOLL_CTL_ADD) /* rollback status when CTL_ADD fail */ @@ -1442,7 +1442,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 || efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) { - RTE_LOG(ERR, EAL, "Wrong intr vector number.\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong intr vector number."); return -EPERM; } @@ -1452,7 +1452,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) != RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event already been added.\n"); + RTE_LOG_LINE(INFO, EAL, "Event already been added."); return -EEXIST; } @@ -1465,9 +1465,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = rte_epoll_ctl(epfd, epfd_op, rte_intr_efds_index_get(intr_handle, efd_idx), rev); if (!rc) - RTE_LOG(DEBUG, EAL, - "efd %d associated with vec %d added on epfd %d" - "\n", rev->fd, vec, epfd); + RTE_LOG_LINE(DEBUG, EAL, + "efd %d associated with vec %d added on epfd %d", + rev->fd, vec, epfd); else rc = -EPERM; break; @@ -1476,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) == RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event does not exist.\n"); + RTE_LOG_LINE(INFO, EAL, "Event does not 
exist."); return -EPERM; } @@ -1485,7 +1485,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = -EPERM; break; default: - RTE_LOG(ERR, EAL, "event op type mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "event op type mismatch"); rc = -EPERM; } @@ -1523,8 +1523,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) for (i = 0; i < n; i++) { fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (fd < 0) { - RTE_LOG(ERR, EAL, - "can't setup eventfd, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, + "can't setup eventfd, error %i (%s)", errno, strerror(errno)); return -errno; } @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); + RTE_LOG_LINE(ERR, EAL, "the efd_counter_size is oversized"); return -EINVAL; } } else { diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c index 2e6a350603..42bf0ee7a1 100644 --- a/lib/eal/linux/eal_lcore.c +++ b/lib/eal/linux/eal_lcore.c @@ -68,7 +68,7 @@ eal_cpu_core_id(unsigned lcore_id) return (unsigned)id; err: - RTE_LOG(ERR, EAL, "Error reading core id value from %s " - "for lcore %u - assuming core 0\n", SYS_CPU_DIR, lcore_id); + RTE_LOG_LINE(ERR, EAL, "Error reading core id value from %s " + "for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id); return 0; } diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c index 9853ec78a2..35a1868e32 100644 --- a/lib/eal/linux/eal_memalloc.c +++ b/lib/eal/linux/eal_memalloc.c @@ -147,7 +147,7 @@ check_numa(void) bool ret = true; /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); ret = false; } return ret; @@ -156,16 +156,16 @@ check_numa(void) static void prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); if (get_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. 
" - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); *oldpolicy = MPOL_DEFAULT; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", socket_id); numa_set_preferred(socket_id); } @@ -173,13 +173,13 @@ prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) static void restore_numa(int *oldpolicy, struct bitmask *oldmask) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", *oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", *oldpolicy); if (*oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(*oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -223,7 +223,7 @@ static int lock(int fd, int type) /* couldn't lock */ return 0; } else if (ret) { - RTE_LOG(ERR, EAL, "%s(): error calling flock(): %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): error calling flock(): %s", __func__, strerror(errno)); return -1; } @@ -251,7 +251,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, snprintf(segname, sizeof(segname), "seg_%i", list_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -265,7 +265,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, list_idx, seg_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -316,7 +316,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, */ ret = stat(path, &st); if (ret < 0 && errno != ENOENT) { - RTE_LOG(DEBUG, EAL, "%s(): stat() for '%s' failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat() for '%s' failed: %s", __func__, path, strerror(errno)); return -1; } @@ -342,7 +342,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, ret == 0) { /* coverity[toctou] */ if (unlink(path) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): could not remove '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): could not remove '%s': %s", __func__, path, strerror(errno)); return -1; } @@ -351,13 +351,13 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, /* coverity[toctou] */ fd = open(path, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(ERR, EAL, "%s(): open '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): open '%s' failed: %s", __func__, path, strerror(errno)); return -1; } /* take out a read lock */ if (lock(fd, LOCK_SH) < 0) { - RTE_LOG(ERR, EAL, "%s(): lock '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): lock '%s' failed: %s", __func__, path, strerror(errno)); close(fd); return -1; @@ -378,7 +378,7 @@ resize_hugefile_in_memory(int fd, uint64_t fa_offset, ret = fallocate(fd, flags, fa_offset, page_sz); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -402,7 +402,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, */ if (!grow) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate not supported, not freeing 
page back to the system", __func__); return -1; } @@ -414,7 +414,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, *dirty = new_size <= cur_size; if (new_size > cur_size && ftruncate(fd, new_size) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); return -1; } @@ -444,12 +444,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, if (ret < 0) { if (fallocate_supported == -1 && errno == ENOTSUP) { - RTE_LOG(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled\n", + RTE_LOG_LINE(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled", __func__); again = true; fallocate_supported = 0; } else { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -483,7 +483,7 @@ close_hugefile(int fd, char *path, int list_idx) if (!internal_conf->in_memory && rte_eal_process_type() == RTE_PROC_PRIMARY && unlink(path)) - RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): unlinking '%s' failed: %s", __func__, path, strerror(errno)); close(fd); @@ -536,12 +536,12 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, /* these are checked at init, but code analyzers don't know that */ if (internal_conf->in_memory && !anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory"); return -1; } if (internal_conf->in_memory && !memfd_create_supported && internal_conf->single_file_segments) { - RTE_LOG(ERR, EAL, "Single-file segments are not supported without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments are not supported without memfd support"); return -1; } @@ -569,7 +569,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx, &dirty); if (fd < 0) { - RTE_LOG(ERR, EAL, "Couldn't get fd on hugepage file\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't get fd on hugepage file"); return -1; } @@ -584,14 +584,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, } else { map_offset = 0; if (ftruncate(fd, alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); goto resized; } if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->in_memory) { if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "%s(): unlink() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): unlink() failed: %s", __func__, strerror(errno)); goto resized; } @@ -610,7 +610,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, map_offset); if (va == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap() failed: %s", __func__, strerror(errno)); /* mmap failed, but the previous region might have been * unmapped anyway. 
try to remap it @@ -618,7 +618,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, goto unmapped; } if (va != addr) { - RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "%s(): wrong mmap() address", __func__); munmap(va, alloc_sz); goto resized; } @@ -631,7 +631,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB", (unsigned int)(alloc_sz >> 20)); goto mapped; } @@ -645,7 +645,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_PHYS_ADDR) { - RTE_LOG(DEBUG, EAL, "%s(): can't get IOVA addr\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): can't get IOVA addr", __func__); goto mapped; } @@ -661,19 +661,19 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, ret = get_mempolicy(&cur_socket_id, NULL, 0, addr, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): get_mempolicy: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): get_mempolicy: %s", __func__, strerror(errno)); goto mapped; } else if (cur_socket_id != socket_id) { - RTE_LOG(DEBUG, EAL, - "%s(): allocation happened on wrong socket (wanted %d, got %d)\n", + RTE_LOG_LINE(DEBUG, EAL, + "%s(): allocation happened on wrong socket (wanted %d, got %d)", __func__, socket_id, cur_socket_id); goto mapped; } } #else if (rte_socket_count() > 1) - RTE_LOG(DEBUG, EAL, "%s(): not checking hugepage NUMA node.\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): not checking hugepage NUMA node.", __func__); #endif @@ -703,7 +703,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * somebody else maps this hole now, we could accidentally * override it in the future. */ - RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n"); + RTE_LOG_LINE(CRIT, EAL, "Can't mmap holes in our virtual address space"); } /* roll back the ref count */ if (internal_conf->single_file_segments) @@ -748,7 +748,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi, if (mmap(ms->addr, ms->len, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "couldn't unmap page\n"); + RTE_LOG_LINE(DEBUG, EAL, "couldn't unmap page"); return -1; } @@ -873,13 +873,13 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -896,7 +896,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (alloc_seg(cur, map_addr, wa->socket, wa->hi, msl_idx, cur_idx)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated\n", + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ @@ -916,7 +916,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) * may fail. 
*/ if (free_seg(tmp, wa->hi, msl_idx, j)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -980,13 +980,13 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1037,7 +1037,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, } } if (!hi) { - RTE_LOG(ERR, EAL, "%s(): can't find relevant hugepage_info entry\n", + RTE_LOG_LINE(ERR, EAL, "%s(): can't find relevant hugepage_info entry", __func__); return -1; } @@ -1061,7 +1061,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n", + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't find suitable memseg_list", __func__); ret = -1; } else if (ret > 0) { @@ -1104,7 +1104,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -1118,7 +1118,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == (int)RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -1133,7 +1133,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; @@ -1344,13 +1344,13 @@ sync_existing(struct rte_memseg_list *primary_msl, */ dir_fd = open(hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1405,7 +1405,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused) } } if (!hi) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); return -1; } @@ -1454,7 +1454,7 @@ secondary_msl_create_walk(const struct rte_memseg_list *msl, primary_msl->memseg_arr.len, primary_msl->memseg_arr.elt_sz); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot initialize local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot initialize local memory map"); return -1; } local_msl->base_va = primary_msl->base_va; @@ -1479,7 +1479,7 @@ 
secondary_msl_destroy_walk(const struct rte_memseg_list *msl, ret = rte_fbarray_destroy(&local_msl->memseg_arr); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot destroy local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy local memory map"); return -1; } local_msl->base_va = NULL; @@ -1501,7 +1501,7 @@ alloc_list(int list_idx, int len) /* ensure we have space to store fd per each possible segment */ data = malloc(sizeof(int) * len); if (data == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate space for file descriptors"); return -1; } /* set all fd's as invalid */ @@ -1750,13 +1750,13 @@ eal_memalloc_init(void) int mfd_res = test_memfd_create(); if (mfd_res < 0) { - RTE_LOG(ERR, EAL, "Unable to check if memfd is supported\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to check if memfd is supported"); return -1; } if (mfd_res == 1) - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); else - RTE_LOG(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages\n"); + RTE_LOG_LINE(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages"); /* we only support single-file segments mode with in-memory mode * if we support hugetlbfs with memfd_create. this code will @@ -1764,18 +1764,18 @@ eal_memalloc_init(void) */ if (internal_conf->single_file_segments && mfd_res != 1) { - RTE_LOG(ERR, EAL, "Single-file segments mode cannot be used without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments mode cannot be used without memfd support"); return -1; } /* this cannot ever happen but better safe than sorry */ if (!anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Using anonymous memory is not supported\n"); + RTE_LOG_LINE(ERR, EAL, "Using anonymous memory is not supported"); return -1; } /* safety net, should be impossible to configure */ if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->hugepage_file.unlink_existing) { - RTE_LOG(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping."); return -1; } } diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c index 9b6f08fba8..2f2551588b 100644 --- a/lib/eal/linux/eal_memory.c +++ b/lib/eal/linux/eal_memory.c @@ -104,7 +104,7 @@ rte_mem_virt2phy(const void *virtaddr) fd = open("/proc/self/pagemap", O_RDONLY); if (fd < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } @@ -112,7 +112,7 @@ rte_mem_virt2phy(const void *virtaddr) virt_pfn = (unsigned long)virtaddr / page_size; offset = sizeof(uint64_t) * virt_pfn; if (lseek(fd, offset, SEEK_SET) == (off_t) -1) { - RTE_LOG(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s", __func__, strerror(errno)); close(fd); return RTE_BAD_IOVA; @@ -121,12 +121,12 @@ rte_mem_virt2phy(const void *virtaddr) retval = read(fd, &page, PFN_MASK_SIZE); close(fd); if (retval < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } else if (retval != PFN_MASK_SIZE) { - RTE_LOG(INFO, EAL, 
"%s(): read %d bytes from /proc/self/pagemap " - "but expected %d:\n", + RTE_LOG_LINE(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap " + "but expected %d:", __func__, retval, PFN_MASK_SIZE); return RTE_BAD_IOVA; } @@ -237,7 +237,7 @@ static int huge_wrap_sigsetjmp(void) /* Callback for numa library. */ void numa_error(char *where) { - RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s failed: %s", where, strerror(errno)); } #endif @@ -267,18 +267,18 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); have_numa = false; } if (have_numa) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); oldmask = numa_allocate_nodemask(); if (get_mempolicy(&oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. " - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); oldpolicy = MPOL_DEFAULT; } for (i = 0; i < RTE_MAX_NUMA_NODES; i++) @@ -316,8 +316,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, essential_memory[j] -= hugepage_sz; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", node_id); numa_set_preferred(node_id); } @@ -332,7 +332,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* try to create hugepage file */ fd = open(hf->filepath, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): open failed: %s", __func__, strerror(errno)); goto out; } @@ -345,7 +345,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, virtaddr = mmap(NULL, hugepage_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0); if (virtaddr == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap failed: %s", __func__, strerror(errno)); close(fd); goto out; @@ -361,8 +361,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more " - "hugepages of size %u MB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more " + "hugepages of size %u MB", (unsigned int)(hugepage_sz / 0x100000)); munmap(virtaddr, hugepage_sz); close(fd); @@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* set shared lock on the file. 
*/ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed:%s ", __func__, strerror(errno)); close(fd); goto out; @@ -390,13 +390,13 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, out: #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES if (maxnode) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", oldpolicy); if (oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -424,8 +424,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) f = fopen("/proc/self/numa_maps", "r"); if (f == NULL) { - RTE_LOG(NOTICE, EAL, "NUMA support not available" - " consider that all memory is in socket_id 0\n"); + RTE_LOG_LINE(NOTICE, EAL, "NUMA support not available" + " consider that all memory is in socket_id 0"); return 0; } @@ -443,20 +443,20 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) /* get zone addr */ virt_addr = strtoull(buf, &end, 16); if (virt_addr == 0 || end == buf) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } /* get node id (socket id) */ nodestr = strstr(buf, " N"); if (nodestr == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } nodestr += 2; end = strstr(nodestr, "="); if (end == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } end[0] = '\0'; @@ -464,7 +464,7 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) socket_id = strtoul(nodestr, &end, 0); if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } @@ -475,8 +475,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) hugepg_tbl[i].socket_id = socket_id; hp_count++; #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES - RTE_LOG(DEBUG, EAL, - "Hugepage %s is on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Hugepage %s is on socket %d", hugepg_tbl[i].filepath, socket_id); #endif } @@ -589,7 +589,7 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl, struct hugepage_file *hp = &hugepg_tbl[page]; if (hp->orig_va != NULL && unlink(hp->filepath)) { - RTE_LOG(WARNING, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(WARNING, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); } } @@ -639,7 +639,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl, hp->orig_va = NULL; if (unlink(hp->filepath) == -1) { - RTE_LOG(ERR, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); return -1; } @@ -676,7 +676,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) socket_id = hugepages[seg_start].socket_id; seg_len = seg_end - seg_start; - RTE_LOG(DEBUG, EAL, "Attempting to map %" 
PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20ULL, socket_id); /* find free space in memseg lists */ @@ -716,8 +716,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } @@ -735,13 +735,13 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) fd = open(hfile->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open '%s': %s", hfile->filepath, strerror(errno)); return -1; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "Could not lock '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Could not lock '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -755,7 +755,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) addr = mmap(addr, page_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Couldn't remap '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't remap '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -790,10 +790,10 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } - RTE_LOG(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20, socket_id); return seg_len; } @@ -819,7 +819,7 @@ static int memseg_list_free(struct rte_memseg_list *msl) { if (rte_fbarray_destroy(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy memseg list"); return -1; } memset(msl, 0, sizeof(*msl)); @@ -965,7 +965,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -976,7 +976,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) /* finally, allocate VA space */ if (eal_memseg_list_alloc(msl, 0) < 0) { - RTE_LOG(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages\n", + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages", page_sz >> 10); return -1; } @@ -1177,15 +1177,15 @@ eal_legacy_hugepage_init(void) /* create a memfd and store it in the segment fd table */ memfd = memfd_create("nohuge", 0); if (memfd < 0) { - RTE_LOG(DEBUG, EAL, "Cannot create memfd: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot create memfd: %s", strerror(errno)); - RTE_LOG(DEBUG, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(DEBUG, EAL, "Falling back to anonymous map"); } else { /* we got an fd - now resize it */ if (ftruncate(memfd, internal_conf->memory) < 0) { 
- RTE_LOG(ERR, EAL, "Cannot resize memfd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize memfd: %s", strerror(errno)); - RTE_LOG(ERR, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(ERR, EAL, "Falling back to anonymous map"); close(memfd); } else { /* creating memfd-backed file was successful. @@ -1193,7 +1193,7 @@ eal_legacy_hugepage_init(void) * other processes (such as vhost backend), so * map it as shared memory. */ - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); fd = memfd; flags = MAP_SHARED; } @@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void) * fit into the DMA mask. */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } @@ -1211,7 +1211,7 @@ eal_legacy_hugepage_init(void) addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0); if (addr == MAP_FAILED || addr != prealloc_addr) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); munmap(prealloc_addr, mem_sz); return -1; @@ -1222,7 +1222,7 @@ eal_legacy_hugepage_init(void) */ if (fd != -1) { if (eal_memalloc_set_seg_list_fd(0, fd) < 0) { - RTE_LOG(ERR, EAL, "Cannot set up segment list fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot set up segment list fd"); /* not a serious error, proceed */ } } @@ -1231,13 +1231,13 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); if (rte_eal_iova_mode() == RTE_IOVA_VA && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter.", __func__); goto fail; } @@ -1292,8 +1292,8 @@ eal_legacy_hugepage_init(void) pages_old = hpi->num_pages[0]; pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, memory); if (pages_new < pages_old) { - RTE_LOG(DEBUG, EAL, - "%d not %d hugepages of size %u MB allocated\n", + RTE_LOG_LINE(DEBUG, EAL, + "%d not %d hugepages of size %u MB allocated", pages_new, pages_old, (unsigned)(hpi->hugepage_sz / 0x100000)); @@ -1309,23 +1309,23 @@ eal_legacy_hugepage_init(void) rte_eal_iova_mode() != RTE_IOVA_VA) { /* find physical addresses for each hugepage */ if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to find phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } else { /* set physical addresses for each hugepage */ if (set_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to set phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to set phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){ - RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages", (unsigned)(hpi->hugepage_sz / 0x100000)); goto fail; } @@ -1382,9 +1382,9 @@ 
eal_legacy_hugepage_init(void) for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) { for (j = 0; j < RTE_MAX_NUMA_NODES; j++) { if (used_hp[i].num_pages[j] > 0) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Requesting %u pages of size %uMB" - " from socket %i\n", + " from socket %i", used_hp[i].num_pages[j], (unsigned) (used_hp[i].hugepage_sz / 0x100000), @@ -1398,7 +1398,7 @@ eal_legacy_hugepage_init(void) nr_hugefiles * sizeof(struct hugepage_file)); if (hugepage == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); goto fail; } memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file)); @@ -1409,7 +1409,7 @@ eal_legacy_hugepage_init(void) */ if (unmap_unneeded_hugepages(tmp_hp, used_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unmapping and locking hugepages failed!"); goto fail; } @@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void) */ if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles, tmp_hp, nr_hugefiles) < 0) { - RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Copying tables to shared memory failed!"); goto fail; } @@ -1428,7 +1428,7 @@ eal_legacy_hugepage_init(void) /* for legacy 32-bit mode, we did not preallocate VA space, so do it */ if (internal_conf->legacy_mem && prealloc_segments(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Could not preallocate VA space for hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Could not preallocate VA space for hugepages"); goto fail; } #endif @@ -1437,14 +1437,14 @@ eal_legacy_hugepage_init(void) * pages become first-class citizens in DPDK memory subsystem */ if (remap_needed_hugepages(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Couldn't remap hugepage files into memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't remap hugepage files into memseg lists"); goto fail; } /* free the hugepage backing files */ if (internal_conf->hugepage_file.unlink_before_mapping && unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking hugepage files failed!"); goto fail; } @@ -1480,8 +1480,8 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); goto fail; } @@ -1527,15 +1527,15 @@ eal_legacy_hugepage_attach(void) int fd, fd_hugepage = -1; if (aslr_enabled() > 0) { - RTE_LOG(WARNING, EAL, "WARNING: Address Space Layout Randomization " - "(ASLR) is enabled in the kernel.\n"); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory " - "into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Address Space Layout Randomization " + "(ASLR) is enabled in the kernel."); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory " + "into secondary processes"); } fd_hugepage = open(eal_hugepage_data_path(), O_RDONLY); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", eal_hugepage_data_path()); goto error; } @@ -1543,13 +1543,13 @@ eal_legacy_hugepage_attach(void) size = getFileSize(fd_hugepage); hp = mmap(NULL, size, PROT_READ, 
MAP_PRIVATE, fd_hugepage, 0); if (hp == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not mmap %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not mmap %s", eal_hugepage_data_path()); goto error; } num_hp = size / sizeof(struct hugepage_file); - RTE_LOG(DEBUG, EAL, "Analysing %u files\n", num_hp); + RTE_LOG_LINE(DEBUG, EAL, "Analysing %u files", num_hp); /* map all segments into memory to make sure we get the addrs. the * segments themselves are already in memseg list (which is shared and @@ -1570,7 +1570,7 @@ eal_legacy_hugepage_attach(void) fd = open(hf->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s: %s", hf->filepath, strerror(errno)); goto error; } @@ -1578,14 +1578,14 @@ eal_legacy_hugepage_attach(void) map_addr = mmap(map_addr, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0); if (map_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not map %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not map %s: %s", hf->filepath, strerror(errno)); goto fd_error; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed: %s", __func__, strerror(errno)); goto mmap_error; } @@ -1593,13 +1593,13 @@ eal_legacy_hugepage_attach(void) /* find segment data */ msl = rte_mem_virt2memseg_list(map_addr); if (msl == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg list\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg list", __func__); goto mmap_error; } ms = rte_mem_virt2memseg(map_addr, msl); if (ms == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg", __func__); goto mmap_error; } @@ -1607,14 +1607,14 @@ eal_legacy_hugepage_attach(void) msl_idx = msl - mcfg->memsegs; ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms); if (ms_idx < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg idx\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg idx", __func__); goto mmap_error; } /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } /* unmap the hugepage config file, since we are done using it */ @@ -1642,9 +1642,9 @@ static int eal_hugepage_attach(void) { if (eal_memalloc_sync_with_primary()) { - RTE_LOG(ERR, EAL, "Could not map memory from primary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not map memory from primary process"); if (aslr_enabled() > 0) - RTE_LOG(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes\n"); + RTE_LOG_LINE(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes"); return -1; } return 0; @@ -1740,7 +1740,7 @@ memseg_primary_init_32(void) max_mem = (uint64_t)RTE_MAX_MEM_MB << 20; if (total_requested_mem > max_mem) { - RTE_LOG(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory\n", + RTE_LOG_LINE(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory", (unsigned int)(max_mem >> 20)); return -1; } @@ -1787,7 +1787,7 @@ memseg_primary_init_32(void) skip |= active_sockets == 0 && socket_id != main_lcore_socket; if (skip) { - RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Will not preallocate memory on socket %u", socket_id); continue; } 
@@ -1819,8 +1819,8 @@ memseg_primary_init_32(void) max_pagesz_mem = RTE_ALIGN_FLOOR(max_pagesz_mem, hugepage_sz); - RTE_LOG(DEBUG, EAL, "Attempting to preallocate " - "%" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to preallocate " + "%" PRIu64 "M on socket %i", max_pagesz_mem >> 20, socket_id); type_msl_idx = 0; @@ -1830,8 +1830,8 @@ memseg_primary_init_32(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -1847,7 +1847,7 @@ memseg_primary_init_32(void) /* failing to allocate a memseg list is * a serious error. */ - RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list"); return -1; } @@ -1855,7 +1855,7 @@ memseg_primary_init_32(void) /* if we couldn't allocate VA space, we * can try with smaller page sizes. */ - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size"); /* deallocate memseg list */ if (memseg_list_free(msl)) return -1; @@ -1870,7 +1870,7 @@ memseg_primary_init_32(void) cur_socket_mem += cur_pagesz_mem; } if (cur_socket_mem == 0) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space on socket %u", socket_id); return -1; } @@ -1901,13 +1901,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } @@ -1930,21 +1930,21 @@ rte_eal_memseg_init(void) lim.rlim_cur = lim.rlim_max; if (setrlimit(RLIMIT_NOFILE, &lim) < 0) { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files failed: %s", strerror(errno)); } else { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files to %" - PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files to %" + PRIu64, (uint64_t)lim.rlim_cur); } } else { - RTE_LOG(ERR, EAL, "Cannot get current resource limits\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current resource limits"); } #ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES if (!internal_conf->legacy_mem && rte_socket_count() > 1) { - RTE_LOG(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.\n"); - RTE_LOG(WARNING, EAL, "This will have adverse consequences for performance and usability.\n"); - RTE_LOG(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.\n"); + RTE_LOG_LINE(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support."); + RTE_LOG_LINE(WARNING, EAL, "This will have adverse consequences for performance and usability."); + RTE_LOG_LINE(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support."); } #endif diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c index 880070c627..80b6f19a9e 100644 --- a/lib/eal/linux/eal_thread.c +++ 
b/lib/eal/linux/eal_thread.c @@ -28,7 +28,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); ret = pthread_setname_np((pthread_t)thread_id.opaque_id, truncated); #endif @@ -37,5 +37,5 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) RTE_SET_USED(thread_name); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index df9ad61ae9..3813b1a66e 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -139,20 +139,20 @@ rte_eal_hpet_init(int make_default) eal_get_internal_configuration(); if (internal_conf->no_hpet) { - RTE_LOG(NOTICE, EAL, "HPET is disabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "HPET is disabled"); return -1; } fd = open(DEV_HPET, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!\n", + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!", strerror(errno)); internal_conf->no_hpet = 1; return -1; } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!"); close(fd); internal_conf->no_hpet = 1; return -1; @@ -166,7 +166,7 @@ rte_eal_hpet_init(int make_default) eal_hpet_resolution_hz = (1000ULL*1000ULL*1000ULL*1000ULL*1000ULL) / (uint64_t)eal_hpet_resolution_fs; - RTE_LOG(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz\n", + RTE_LOG_LINE(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz", eal_hpet_resolution_hz/1000); eal_hpet_msb = (eal_hpet->counter_l >> 30); @@ -176,7 +176,7 @@ rte_eal_hpet_init(int make_default) ret = rte_thread_create_internal_control(&msb_inc_thread_id, "hpet-msb", hpet_msb_inc, NULL); if (ret != 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot create HPET timer thread!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot create HPET timer thread!"); internal_conf->no_hpet = 1; return -1; } diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c index ad3c1654b2..e8a783aaa8 100644 --- a/lib/eal/linux/eal_vfio.c +++ b/lib/eal/linux/eal_vfio.c @@ -367,7 +367,7 @@ vfio_open_group_fd(int iommu_group_num) if (vfio_group_fd < 0) { /* if file not found, it's not an error */ if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -379,8 +379,8 @@ vfio_open_group_fd(int iommu_group_num) vfio_group_fd = open(filename, O_RDWR); if (vfio_group_fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, - "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -408,14 +408,14 @@ vfio_open_group_fd(int iommu_group_num) if (p->result == SOCKET_OK && mp_rep->num_fds == 1) { vfio_group_fd = mp_rep->fds[0]; } else if (p->result == SOCKET_NO_FD) { - RTE_LOG(ERR, EAL, "Bad VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Bad VFIO group fd"); vfio_group_fd = -ENOENT; } } free(mp_reply.msgs); if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT) - RTE_LOG(ERR, EAL, "Cannot request VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO group fd"); return vfio_group_fd; } @@ -452,7 +452,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* 
Lets see first if there is room for a new group */ if (vfio_cfg->vfio_active_groups == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "Maximum number of VFIO groups reached!\n"); + RTE_LOG_LINE(ERR, EAL, "Maximum number of VFIO groups reached!"); return -1; } @@ -465,13 +465,13 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* This should not happen */ if (i == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "No VFIO group free slot found\n"); + RTE_LOG_LINE(ERR, EAL, "No VFIO group free slot found"); return -1; } vfio_group_fd = vfio_open_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(ERR, EAL, "Failed to open VFIO group %d\n", + RTE_LOG_LINE(ERR, EAL, "Failed to open VFIO group %d", iommu_group_num); return vfio_group_fd; } @@ -551,13 +551,13 @@ vfio_group_device_get(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices++; } @@ -570,13 +570,13 @@ vfio_group_device_put(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices--; } @@ -589,13 +589,13 @@ vfio_group_device_count(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) { - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); return -1; } @@ -636,8 +636,8 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, while (cur_len < len) { /* some memory segments may have invalid IOVA */ if (ms->iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Memory segment at %p has bad IOVA, skipping\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memory segment at %p has bad IOVA, skipping", ms->addr); goto next; } @@ -670,7 +670,7 @@ vfio_sync_default_container(void) /* default container fd should have been opened in rte_vfio_enable() */ if (!default_vfio_cfg->vfio_enabled || default_vfio_cfg->vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, "VFIO support is not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support is not initialized"); return -1; } @@ -690,8 +690,8 @@ vfio_sync_default_container(void) } free(mp_reply.msgs); if (iommu_type_id < 0) { - RTE_LOG(ERR, EAL, - "Could not get IOMMU type for default container\n"); + RTE_LOG_LINE(ERR, EAL, + "Could not get IOMMU type for default container"); return -1; } @@ -708,7 +708,7 @@ vfio_sync_default_container(void) return 0; } - RTE_LOG(ERR, EAL, "Could not find IOMMU type id (%i)\n", + RTE_LOG_LINE(ERR, EAL, "Could not find IOMMU type id (%i)", iommu_type_id); return -1; } @@ -721,7 +721,7 @@ rte_vfio_clear_group(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO 
group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } @@ -756,8 +756,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret == 0) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -776,8 +776,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * isn't managed by VFIO */ if (vfio_group_fd == -ENOENT) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -790,14 +790,14 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* check if the group is viable */ ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get VFIO group status, " - "error %i (%s)\n", dev_addr, errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s cannot get VFIO group status, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; } else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) { - RTE_LOG(ERR, EAL, "%s VFIO group is not viable! " - "Not all devices in IOMMU group bound to VFIO or unbound\n", + RTE_LOG_LINE(ERR, EAL, "%s VFIO group is not viable! " + "Not all devices in IOMMU group bound to VFIO or unbound", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -817,9 +817,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER, &vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s cannot add VFIO group to container, error " - "%i (%s)\n", dev_addr, errno, strerror(errno)); + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -841,8 +841,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* select an IOMMU type which we will be using */ t = vfio_set_iommu_type(vfio_container_fd); if (!t) { - RTE_LOG(ERR, EAL, - "%s failed to select IOMMU type\n", + RTE_LOG_LINE(ERR, EAL, + "%s failed to select IOMMU type", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -857,9 +857,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, else ret = 0; if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s DMA remapping failed, error " - "%i (%s)\n", + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -886,10 +886,10 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, map->addr, map->iova, map->len, 1); if (ret) { - RTE_LOG(ERR, EAL, "Couldn't map user memory for DMA: " + RTE_LOG_LINE(ERR, EAL, "Couldn't map user memory for DMA: " "va: 0x%" PRIx64 " " "iova: 0x%" PRIx64 " " - "len: 0x%" PRIu64 "\n", + "len: 0x%" PRIu64, map->addr, map->iova, map->len); rte_spinlock_recursive_unlock( @@ -911,13 +911,13 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, rte_mcfg_mem_read_unlock(); if (ret && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n"); + RTE_LOG_LINE(ERR, EAL, "Could not install memory event callback for VFIO"); return -1; } if (ret) - RTE_LOG(DEBUG, EAL, "Memory event callbacks not supported\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "Memory event callbacks not supported"); else - RTE_LOG(DEBUG, EAL, "Installed memory event callback for VFIO\n"); + RTE_LOG_LINE(DEBUG, EAL, "Installed memory event callback for VFIO"); } } else if (rte_eal_process_type() != RTE_PROC_PRIMARY && vfio_cfg == default_vfio_cfg && @@ -929,7 +929,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, */ ret = vfio_sync_default_container(); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not sync default VFIO container\n"); + RTE_LOG_LINE(ERR, EAL, "Could not sync default VFIO container"); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -937,7 +937,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* we have successfully initialized VFIO, notify user */ const struct vfio_iommu_type *t = default_vfio_cfg->vfio_iommu_type; - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); } @@ -965,7 +965,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * the VFIO group or the container not having IOMMU configured. */ - RTE_LOG(WARNING, EAL, "Getting a vfio_dev_fd for %s failed\n", + RTE_LOG_LINE(WARNING, EAL, "Getting a vfio_dev_fd for %s failed", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -976,8 +976,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, dev_get_info: ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get device info, " - "error %i (%s)\n", dev_addr, errno, + RTE_LOG_LINE(ERR, EAL, "%s cannot get device info, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(*vfio_dev_fd); close(vfio_group_fd); @@ -1007,7 +1007,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret <= 0) { - RTE_LOG(WARNING, EAL, "%s not managed by VFIO driver\n", + RTE_LOG_LINE(WARNING, EAL, "%s not managed by VFIO driver", dev_addr); /* This is an error at this point. 
*/ ret = -1; @@ -1017,7 +1017,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get the actual group fd */ vfio_group_fd = rte_vfio_get_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(INFO, EAL, "rte_vfio_get_group_fd failed for %s\n", + RTE_LOG_LINE(INFO, EAL, "rte_vfio_get_group_fd failed for %s", dev_addr); ret = vfio_group_fd; goto out; @@ -1034,7 +1034,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* Closing a device */ if (close(vfio_dev_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_dev_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_dev_fd for %s", dev_addr); ret = -1; goto out; @@ -1047,14 +1047,14 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, if (!vfio_group_device_count(vfio_group_fd)) { if (close(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_group_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_group_fd for %s", dev_addr); ret = -1; goto out; } if (rte_vfio_clear_group(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when clearing group for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when clearing group for %s", dev_addr); ret = -1; goto out; @@ -1101,21 +1101,21 @@ rte_vfio_enable(const char *modname) } } - RTE_LOG(DEBUG, EAL, "Probing VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Probing VFIO support..."); /* check if vfio module is loaded */ vfio_available = rte_eal_check_module(modname); /* return error directly */ if (vfio_available == -1) { - RTE_LOG(INFO, EAL, "Could not get loaded module details!\n"); + RTE_LOG_LINE(INFO, EAL, "Could not get loaded module details!"); return -1; } /* return 0 if VFIO modules not loaded */ if (vfio_available == 0) { - RTE_LOG(DEBUG, EAL, - "VFIO modules not loaded, skipping VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, + "VFIO modules not loaded, skipping VFIO support..."); return 0; } @@ -1131,10 +1131,10 @@ rte_vfio_enable(const char *modname) /* check if we have VFIO driver enabled */ if (default_vfio_cfg->vfio_container_fd != -1) { - RTE_LOG(INFO, EAL, "VFIO support initialized\n"); + RTE_LOG_LINE(INFO, EAL, "VFIO support initialized"); default_vfio_cfg->vfio_enabled = 1; } else { - RTE_LOG(NOTICE, EAL, "VFIO support could not be initialized\n"); + RTE_LOG_LINE(NOTICE, EAL, "VFIO support could not be initialized"); } return 0; @@ -1186,7 +1186,7 @@ vfio_get_default_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request default VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request default VFIO container fd"); return -1; } @@ -1209,13 +1209,13 @@ vfio_set_iommu_type(int vfio_container_fd) int ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, t->type_id); if (!ret) { - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); return t; } /* not an error, there may be more supported IOMMU types */ - RTE_LOG(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " - "%i (%s)\n", t->type_id, t->name, errno, + RTE_LOG_LINE(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " + "%i (%s)", t->type_id, t->name, errno, strerror(errno)); } /* if we didn't find a suitable IOMMU type, fail */ @@ -1233,15 +1233,15 @@ vfio_has_supported_extensions(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, t->type_id); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not get IOMMU type, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Could not get IOMMU type, 
error " + "%i (%s)", errno, strerror(errno)); close(vfio_container_fd); return -1; } else if (ret == 1) { /* we found a supported extension */ n_extensions++; } - RTE_LOG(DEBUG, EAL, "IOMMU type %d (%s) is %s\n", + RTE_LOG_LINE(DEBUG, EAL, "IOMMU type %d (%s) is %s", t->type_id, t->name, ret ? "supported" : "not supported"); } @@ -1271,9 +1271,9 @@ rte_vfio_get_container_fd(void) if (internal_conf->process_type == RTE_PROC_PRIMARY) { vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR); if (vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO container %s, error " - "%i (%s)\n", VFIO_CONTAINER_PATH, + "%i (%s)", VFIO_CONTAINER_PATH, errno, strerror(errno)); return -1; } @@ -1282,19 +1282,19 @@ rte_vfio_get_container_fd(void) ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION); if (ret != VFIO_API_VERSION) { if (ret < 0) - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Could not get VFIO API version, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); else - RTE_LOG(ERR, EAL, "Unsupported VFIO API version!\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported VFIO API version!"); close(vfio_container_fd); return -1; } ret = vfio_has_supported_extensions(vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, - "No supported IOMMU extensions found!\n"); + RTE_LOG_LINE(ERR, EAL, + "No supported IOMMU extensions found!"); return -1; } @@ -1322,7 +1322,7 @@ rte_vfio_get_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO container fd"); return -1; } @@ -1352,7 +1352,7 @@ rte_vfio_get_group_num(const char *sysfs_base, tok, RTE_DIM(tok), '/'); if (ret <= 0) { - RTE_LOG(ERR, EAL, "%s cannot get IOMMU group\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s cannot get IOMMU group", dev_addr); return -1; } @@ -1362,7 +1362,7 @@ rte_vfio_get_group_num(const char *sysfs_base, end = group_tok; *iommu_group_num = strtol(group_tok, &end, 10); if ((end != group_tok && *end != '\0') || errno != 0) { - RTE_LOG(ERR, EAL, "%s error parsing IOMMU number!\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s error parsing IOMMU number!", dev_addr); return -1; } @@ -1411,12 +1411,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, * returned from kernel. 
*/ if (errno == EEXIST) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Memory segment is already mapped, skipping"); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot set up DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1429,12 +1429,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot clear DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot clear DMA remapping, error " + "%i (%s)", errno, strerror(errno)); return -1; } else if (dma_unmap.size != len) { - RTE_LOG(ERR, EAL, "Unexpected size %"PRIu64 - " of DMA remapping cleared instead of %"PRIu64"\n", + RTE_LOG_LINE(ERR, EAL, "Unexpected size %"PRIu64 + " of DMA remapping cleared instead of %"PRIu64, (uint64_t)dma_unmap.size, len); rte_errno = EIO; return -1; @@ -1470,16 +1470,16 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, struct vfio_iommu_type1_dma_map dma_map; if (iova + len > spapr_dma_win_len) { - RTE_LOG(ERR, EAL, "DMA map attempt outside DMA window\n"); + RTE_LOG_LINE(ERR, EAL, "DMA map attempt outside DMA window"); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot register vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1493,8 +1493,8 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map); if (ret) { - RTE_LOG(ERR, EAL, "Cannot map vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot map vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1509,17 +1509,17 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot unregister vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1599,7 +1599,7 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) */ FILE *fd = fopen(proc_iomem, "r"); if (fd == NULL) { - RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Cannot open %s", proc_iomem); return -1; } /* Scan /proc/iomem for the highest PA in the system */ @@ -1612,15 +1612,15 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) /* Validate the format of the memory string */ if (space == NULL || dash == NULL || space < dash) { - RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n", + RTE_LOG_LINE(ERR, EAL, "Can't parse line \"%s\" in file %s", line, proc_iomem); continue; } start = strtoull(line, NULL, 16); end = strtoull(dash + 1, NULL, 16); - RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 - " to 0x%" PRIx64 "\n", start, end); + RTE_LOG_LINE(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 + " to 0x%" PRIx64, start, end); if (end > max) max = end; } @@ -1628,22
+1628,22 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) fclose(fd); if (max == 0) { - RTE_LOG(ERR, EAL, "Failed to find valid \"System RAM\" " - "entry in file %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Failed to find valid \"System RAM\" " + "entry in file %s", proc_iomem); return -1; } spapr_dma_win_len = rte_align64pow2(max + 1); return 0; } else if (rte_eal_iova_mode() == RTE_IOVA_VA) { - RTE_LOG(DEBUG, EAL, "Highest VA address in memseg list is 0x%" - PRIx64 "\n", param->max_va); + RTE_LOG_LINE(DEBUG, EAL, "Highest VA address in memseg list is 0x%" + PRIx64, param->max_va); spapr_dma_win_len = rte_align64pow2(param->max_va); return 0; } spapr_dma_win_len = 0; - RTE_LOG(ERR, EAL, "Unsupported IOVA mode\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported IOVA mode"); return -1; } @@ -1668,18 +1668,18 @@ spapr_dma_win_size(void) /* walk the memseg list to find the page size/max VA address */ memset(&param, 0, sizeof(param)); if (rte_memseg_list_walk(vfio_spapr_size_walk, &param) < 0) { - RTE_LOG(ERR, EAL, "Failed to walk memseg list for DMA window size\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to walk memseg list for DMA window size"); return -1; } /* we can't be sure if DMA window covers external memory */ if (param.is_user_managed) - RTE_LOG(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU\n"); + RTE_LOG_LINE(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU"); /* check physical/virtual memory size */ if (find_highest_mem_addr(&param) < 0) return -1; - RTE_LOG(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64, spapr_dma_win_len); spapr_dma_win_page_sz = param.page_sz; rte_mem_set_dma_mask(rte_ctz64(spapr_dma_win_len)); @@ -1703,7 +1703,7 @@ vfio_spapr_create_dma_window(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info); if (ret) { - RTE_LOG(ERR, EAL, "Cannot get IOMMU info, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get IOMMU info, error %i (%s)", errno, strerror(errno)); return -1; } @@ -1744,17 +1744,17 @@ vfio_spapr_create_dma_window(int vfio_container_fd) } #endif /* VFIO_IOMMU_SPAPR_INFO_DDW */ if (ret) { - RTE_LOG(ERR, EAL, "Cannot create new DMA window, error " - "%i (%s)\n", errno, strerror(errno)); - RTE_LOG(ERR, EAL, - "Consider using a larger hugepage size if supported by the system\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new DMA window, error " + "%i (%s)", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, + "Consider using a larger hugepage size if supported by the system"); return -1; } /* verify the start address */ if (create.start_addr != 0) { - RTE_LOG(ERR, EAL, "Received unsupported start address 0x%" - PRIx64 "\n", (uint64_t)create.start_addr); + RTE_LOG_LINE(ERR, EAL, "Received unsupported start address 0x%" + PRIx64, (uint64_t)create.start_addr); return -1; } return ret; @@ -1769,13 +1769,13 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, if (do_map) { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 1)) { - RTE_LOG(ERR, EAL, "Failed to map DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to map DMA"); ret = -1; } } else { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 0)) { - RTE_LOG(ERR, EAL, "Failed to unmap DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap DMA"); ret = -1; } } @@ -1787,7 +1787,7 @@ static int vfio_spapr_dma_map(int vfio_container_fd) { if (vfio_spapr_create_dma_window(vfio_container_fd) < 0) { -
RTE_LOG(ERR, EAL, "Could not create new DMA window!\n"); + RTE_LOG_LINE(ERR, EAL, "Could not create new DMA window!"); return -1; } @@ -1822,14 +1822,14 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, const struct vfio_iommu_type *t = vfio_cfg->vfio_iommu_type; if (!t) { - RTE_LOG(ERR, EAL, "VFIO support not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support not initialized"); rte_errno = ENODEV; return -1; } if (!t->dma_user_map_func) { - RTE_LOG(ERR, EAL, - "VFIO custom DMA region mapping not supported by IOMMU %s\n", + RTE_LOG_LINE(ERR, EAL, + "VFIO custom DMA region mapping not supported by IOMMU %s", t->name); rte_errno = ENOTSUP; return -1; @@ -1851,7 +1851,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, user_mem_maps = &vfio_cfg->mem_maps; rte_spinlock_recursive_lock(&user_mem_maps->lock); if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "No more space for user mem maps\n"); + RTE_LOG_LINE(ERR, EAL, "No more space for user mem maps"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1865,7 +1865,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * this to be unsupported, because we can't just store any old * mapping and pollute list of active mappings willy-nilly. */ - RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't map new region for DMA"); ret = -1; goto out; } @@ -1921,7 +1921,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, orig_maps, RTE_DIM(orig_maps)); /* did we find anything? */ if (n_orig < 0) { - RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find previously mapped region"); rte_errno = EINVAL; ret = -1; goto out; @@ -1943,7 +1943,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, vaddr + len, iova + len); if (!start_aligned || !end_aligned) { - RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA partial unmap unsupported"); rte_errno = ENOTSUP; ret = -1; goto out; @@ -1961,7 +1961,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, /* can we store the new maps in our list? */ newlen = (user_mem_maps->n_maps - n_orig) + n_new; if (newlen >= VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space to store partial mapping"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1978,11 +1978,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * within our mapped range but had invalid alignment). 
*/ if (rte_errno != ENODEV && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't unmap region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't unmap region for DMA"); ret = -1; goto out; } else { - RTE_LOG(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway"); } } @@ -2005,8 +2005,8 @@ rte_vfio_noiommu_is_enabled(void) fd = open(VFIO_NOIOMMU_MODE, O_RDONLY); if (fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } /* @@ -2019,8 +2019,8 @@ rte_vfio_noiommu_is_enabled(void) cnt = read(fd, &c, 1); close(fd); if (cnt != 1) { - RTE_LOG(ERR, EAL, "Unable to read from VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Unable to read from VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -2039,13 +2039,13 @@ rte_vfio_container_create(void) } if (i == VFIO_MAX_CONTAINERS) { - RTE_LOG(ERR, EAL, "Exceed max VFIO container limit\n"); + RTE_LOG_LINE(ERR, EAL, "Exceed max VFIO container limit"); return -1; } vfio_cfgs[i].vfio_container_fd = rte_vfio_get_container_fd(); if (vfio_cfgs[i].vfio_container_fd < 0) { - RTE_LOG(NOTICE, EAL, "Fail to create a new VFIO container\n"); + RTE_LOG_LINE(NOTICE, EAL, "Fail to create a new VFIO container"); return -1; } @@ -2060,7 +2060,7 @@ rte_vfio_container_destroy(int container_fd) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2084,7 +2084,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2100,7 +2100,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2113,14 +2113,14 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) /* This should not happen */ if (i == VFIO_MAX_GROUPS || cur_grp == NULL) { - RTE_LOG(ERR, EAL, "Specified VFIO group number not found\n"); + RTE_LOG_LINE(ERR, EAL, "Specified VFIO group number not found"); return -1; } if (cur_grp->fd >= 0 && close(cur_grp->fd) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Error when closing vfio_group_fd for iommu_group_num " - "%d\n", iommu_group_num); + "%d", iommu_group_num); return -1; } cur_grp->group_num = -1; @@ -2144,7 +2144,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2164,7 +2164,7 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } diff --git a/lib/eal/linux/eal_vfio_mp_sync.c 
b/lib/eal/linux/eal_vfio_mp_sync.c index 157f20e583..a78113844b 100644 --- a/lib/eal/linux/eal_vfio_mp_sync.c +++ b/lib/eal/linux/eal_vfio_mp_sync.c @@ -33,7 +33,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) (const struct vfio_mp_param *)msg->param; if (msg->len_param != sizeof(*m)) { - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } @@ -95,7 +95,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) break; } default: - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } diff --git a/lib/eal/riscv/rte_cycles.c b/lib/eal/riscv/rte_cycles.c index 358f271311..e27e02d9a9 100644 --- a/lib/eal/riscv/rte_cycles.c +++ b/lib/eal/riscv/rte_cycles.c @@ -38,14 +38,14 @@ __rte_riscv_timefrq(void) break; } fail: - RTE_LOG(WARNING, EAL, "Unable to read timebase-frequency from FDT.\n"); + RTE_LOG_LINE(WARNING, EAL, "Unable to read timebase-frequency from FDT."); return 0; } uint64_t get_tsc_freq_arch(void) { - RTE_LOG(NOTICE, EAL, "TSC using RISC-V %s.\n", + RTE_LOG_LINE(NOTICE, EAL, "TSC using RISC-V %s.", RTE_RISCV_RDTSC_USE_HPM ? "rdcycle" : "rdtime"); if (!RTE_RISCV_RDTSC_USE_HPM) return __rte_riscv_timefrq(); diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c index afbab9368a..4d90c2707f 100644 --- a/lib/eal/unix/eal_filesystem.c +++ b/lib/eal/unix/eal_filesystem.c @@ -41,7 +41,7 @@ int eal_create_runtime_dir(void) /* create DPDK subdirectory under runtime dir */ ret = snprintf(tmp, sizeof(tmp), "%s/dpdk", directory); if (ret < 0 || ret == sizeof(tmp)) { - RTE_LOG(ERR, EAL, "Error creating DPDK runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating DPDK runtime path name"); return -1; } @@ -49,7 +49,7 @@ int eal_create_runtime_dir(void) ret = snprintf(run_dir, sizeof(run_dir), "%s/%s", tmp, eal_get_hugefile_prefix()); if (ret < 0 || ret == sizeof(run_dir)) { - RTE_LOG(ERR, EAL, "Error creating prefix-specific runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating prefix-specific runtime path name"); return -1; } @@ -58,14 +58,14 @@ int eal_create_runtime_dir(void) */ ret = mkdir(tmp, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", tmp, strerror(errno)); return -1; } ret = mkdir(run_dir, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", run_dir, strerror(errno)); return -1; } @@ -84,20 +84,20 @@ int eal_parse_sysfs_value(const char *filename, unsigned long *val) char *end = NULL; if ((f = fopen(filename, "r")) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot open sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot open sysfs value %s", __func__, filename); return -1; } if (fgets(buf, sizeof(buf), f) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot read sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot read sysfs value %s", __func__, filename); fclose(f); return -1; } *val = strtoul(buf, &end, 0); if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) { - RTE_LOG(ERR, EAL, "%s(): cannot parse sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot parse sysfs value %s", __func__, filename); fclose(f); return -1; diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c index 1a7cf8e7b7..b071bb1396 100644 --- a/lib/eal/unix/eal_firmware.c +++ 
b/lib/eal/unix/eal_firmware.c @@ -151,7 +151,7 @@ rte_firmware_read(const char *name, void **buf, size_t *bufsz) path[PATH_MAX - 1] = '\0'; #ifndef RTE_HAS_LIBARCHIVE if (access(path, F_OK) == 0) { - RTE_LOG(WARNING, EAL, "libarchive not linked, %s cannot be decompressed\n", + RTE_LOG_LINE(WARNING, EAL, "libarchive not linked, %s cannot be decompressed", path); } #else diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c index 68ae93bd6e..16183fb395 100644 --- a/lib/eal/unix/eal_unix_memory.c +++ b/lib/eal/unix/eal_unix_memory.c @@ -29,8 +29,8 @@ mem_map(void *requested_addr, size_t size, int prot, int flags, { void *virt = mmap(requested_addr, size, prot, flags, fd, offset); if (virt == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, - "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s\n", + RTE_LOG_LINE(DEBUG, EAL, + "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s", requested_addr, size, prot, flags, fd, offset, strerror(errno)); rte_errno = errno; @@ -44,7 +44,7 @@ mem_unmap(void *virt, size_t size) { int ret = munmap(virt, size); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s", virt, size, strerror(errno)); rte_errno = errno; } @@ -83,7 +83,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump) int flags = dump ? EAL_DODUMP : EAL_DONTDUMP; int ret = madvise(virt, size, flags); if (ret) { - RTE_LOG(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s", virt, size, flags, strerror(rte_errno)); rte_errno = errno; } diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c index 36a21ab2f9..bee77e9448 100644 --- a/lib/eal/unix/rte_thread.c +++ b/lib/eal/unix/rte_thread.c @@ -53,7 +53,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = sched_get_priority_max(SCHED_RR); break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -79,7 +79,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -97,7 +97,7 @@ thread_start_wrapper(void *arg) if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) { ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "rte_thread_set_affinity_by_id failed"); } pthread_mutex_lock(&ctx->wrapper_mutex); @@ -136,7 +136,7 @@ rte_thread_create(rte_thread_t *thread_id, if (thread_attr != NULL) { ret = pthread_attr_init(&attr); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_init failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_init failed"); goto cleanup; } @@ -149,7 +149,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setinheritsched(attrp, PTHREAD_EXPLICIT_SCHED); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setinheritsched failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setinheritsched failed"); goto cleanup; } @@ -165,13 +165,13 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setschedpolicy(attrp, policy); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedpolicy failed\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedpolicy failed"); goto cleanup; } ret = pthread_attr_setschedparam(attrp, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedparam failed"); goto cleanup; } } @@ -179,7 +179,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp, thread_start_wrapper, &ctx); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_create failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_create failed"); goto cleanup; } @@ -211,7 +211,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr) ret = pthread_join((pthread_t)thread_id.opaque_id, pres); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_join failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_join failed"); return ret; } @@ -256,7 +256,7 @@ rte_thread_get_priority(rte_thread_t thread_id, ret = pthread_getschedparam((pthread_t)thread_id.opaque_id, &policy, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_getschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_getschedparam failed"); goto cleanup; } @@ -295,13 +295,13 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *)) *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } err = pthread_key_create(&((*key)->thread_index), destructor); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_create failed: %s", strerror(err)); free(*key); rte_errno = ENOEXEC; @@ -316,13 +316,13 @@ rte_thread_key_delete(rte_thread_key key) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_key_delete(key->thread_index); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_delete failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_delete failed: %s", strerror(err)); free(key); rte_errno = ENOEXEC; @@ -338,13 +338,13 @@ rte_thread_value_set(rte_thread_key key, const void *value) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_setspecific(key->thread_index, value); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_setspecific failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_setspecific failed: %s", strerror(err)); rte_errno = ENOEXEC; return -1; @@ -356,7 +356,7 @@ void * rte_thread_value_get(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c index 7ec2152211..b573fa7c74 100644 --- a/lib/eal/windows/eal.c +++ b/lib/eal/windows/eal.c @@ -67,7 +67,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -175,16 +175,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Windows", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Windows\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Windows", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Windows", opt); } eal_usage(prgname); return -1; @@ -217,7 +217,7 @@ static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Stubs to enable EAL trace point compilation @@ -312,8 +312,8 @@ rte_eal_init(int argc, char **argv) /* Prevent creation of shared memory files. */ if (internal_conf->in_memory == 0) { - RTE_LOG(WARNING, EAL, "Multi-process support is requested, " - "but not available.\n"); + RTE_LOG_LINE(WARNING, EAL, "Multi-process support is requested, " + "but not available."); internal_conf->in_memory = 1; internal_conf->no_shconf = 1; } @@ -356,21 +356,21 @@ rte_eal_init(int argc, char **argv) has_phys_addr = true; if (eal_mem_virt2iova_init() < 0) { /* Non-fatal error if physical addresses are not required. */ - RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, " - "PA will not be available\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot access virt2phys driver, " + "PA will not be available"); has_phys_addr = false; } iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -392,7 +392,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Selected IOVA mode '%s'", iova_mode == RTE_IOVA_PA ? "PA" : "VA"); rte_eal_get_configuration()->iova_mode = iova_mode; @@ -442,7 +442,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -474,7 +474,7 @@ rte_eal_init(int argc, char **argv) ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id, &lcore_config[i].cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Cannot set affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot set affinity"); } /* Initialize services so drivers can register services during probe. 
*/ diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c index 34b52380ce..c56aa0e687 100644 --- a/lib/eal/windows/eal_alarm.c +++ b/lib/eal/windows/eal_alarm.c @@ -92,7 +92,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) int ret; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); ret = -EINVAL; goto exit; } @@ -105,7 +105,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ap = calloc(1, sizeof(*ap)); if (ap == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate alarm entry\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate alarm entry"); ret = -ENOMEM; goto exit; } @@ -129,7 +129,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) /* Directly schedule callback execution. */ ret = alarm_set(ap, deadline); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm"); goto fail; } } else { @@ -143,7 +143,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ret = intr_thread_exec_sync(alarm_task_exec, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm in interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm in interrupt thread"); goto fail; } @@ -187,7 +187,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg) removed = 0; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); return -EINVAL; } @@ -246,7 +246,7 @@ intr_thread_exec_sync(void (*func)(void *arg), void *arg) rte_spinlock_lock(&task.lock); ret = eal_intr_thread_schedule(intr_thread_entry, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot schedule task to interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot schedule task to interrupt thread"); return -EINVAL; } diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c index 56ed70df7d..be646080c3 100644 --- a/lib/eal/windows/eal_debug.c +++ b/lib/eal/windows/eal_debug.c @@ -48,8 +48,8 @@ rte_dump_stack(void) error_code = GetLastError(); if (error_code == ERROR_INVALID_ADDRESS) { /* Missing symbols, print message */ - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [<missing_symbols>]\n", frame_num--); + RTE_LOG_LINE(ERR, EAL, + "%d: [<missing_symbols>]", frame_num--); continue; } else { RTE_LOG_WIN32_ERR("SymFromAddr()"); @@ -67,8 +67,8 @@ rte_dump_stack(void) } } - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [%s (%s+0x%0llx)[0x%0llX]]\n", frame_num, + RTE_LOG_LINE(ERR, EAL, + "%d: [%s (%s+0x%0llx)[0x%0llX]]", frame_num, error_code ? 
"<unknown>" : line.FileName, symbol_info->Name, sym_disp, symbol_info->Address); frame_num--; diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c index 35191056fd..264bc4a649 100644 --- a/lib/eal/windows/eal_dev.c +++ b/lib/eal/windows/eal_dev.c @@ -7,27 +7,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index 775c67e4c4..f2336fbe1e 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -89,8 +89,8 @@ hugepage_info_init(void) } hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz; - RTE_LOG(DEBUG, EAL, - "Found %u hugepages of %zu bytes on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Found %u hugepages of %zu bytes on socket %u", hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id); } @@ -105,13 +105,13 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, - "Cannot claim hugepage privilege, check large-page support privilege\n"); + RTE_LOG_LINE(ERR, EAL, + "Cannot claim hugepage privilege, check large-page support privilege"); return -1; } if (hugepage_info_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot discover available hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot discover available hugepages"); return -1; } diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c index 49efdc098c..a9c62453b8 100644 --- a/lib/eal/windows/eal_interrupts.c +++ b/lib/eal/windows/eal_interrupts.c @@ -39,7 +39,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) bool finished = false; if (eal_intr_thread_handle_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot open interrupt thread handle\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot open interrupt thread handle"); goto cleanup; } @@ -57,7 +57,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) DWORD error = GetLastError(); if (error != WAIT_IO_COMPLETION) { RTE_LOG_WIN32_ERR("GetQueuedCompletionStatusEx()"); - RTE_LOG(ERR, EAL, "Failed waiting for interrupts\n"); + RTE_LOG_LINE(ERR, EAL, "Failed waiting for interrupts"); break; } @@ -94,7 +94,7 @@ rte_eal_intr_init(void) intr_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1); if (intr_iocp == NULL) { RTE_LOG_WIN32_ERR("CreateIoCompletionPort()"); - RTE_LOG(ERR, EAL, "Cannot create interrupt IOCP\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt IOCP"); return -1; } @@ -102,7 +102,7 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, "Cannot create interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt thread"); } return ret; @@ -140,7 +140,7 @@ eal_intr_thread_cancel(void) if (!PostQueuedCompletionStatus( intr_iocp, 0, IOCP_KEY_SHUTDOWN, NULL)) { RTE_LOG_WIN32_ERR("PostQueuedCompletionStatus()"); - RTE_LOG(ERR, EAL, "Cannot cancel 
interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot cancel interrupt thread"); return; } diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c index 286fe241eb..da3be08aab 100644 --- a/lib/eal/windows/eal_lcore.c +++ b/lib/eal/windows/eal_lcore.c @@ -65,7 +65,7 @@ eal_query_group_affinity(void) &infos_size)) { DWORD error = GetLastError(); if (error != ERROR_INSUFFICIENT_BUFFER) { - RTE_LOG(ERR, EAL, "Cannot get group information size, error %lu\n", error); + RTE_LOG_LINE(ERR, EAL, "Cannot get group information size, error %lu", error); rte_errno = EINVAL; ret = -1; goto cleanup; @@ -74,7 +74,7 @@ eal_query_group_affinity(void) infos = malloc(infos_size); if (infos == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for NUMA node information\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for NUMA node information"); rte_errno = ENOMEM; ret = -1; goto cleanup; @@ -82,7 +82,7 @@ eal_query_group_affinity(void) if (!GetLogicalProcessorInformationEx(RelationGroup, infos, &infos_size)) { - RTE_LOG(ERR, EAL, "Cannot get group information, error %lu\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get group information, error %lu", GetLastError()); rte_errno = EINVAL; ret = -1; diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c index aa7589b81d..fa9d1fdc1e 100644 --- a/lib/eal/windows/eal_memalloc.c +++ b/lib/eal/windows/eal_memalloc.c @@ -52,7 +52,7 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, } /* Bugcheck, should not happen. */ - RTE_LOG(DEBUG, EAL, "Attempted to reallocate segment %p " + RTE_LOG_LINE(DEBUG, EAL, "Attempted to reallocate segment %p " "(size %zu) on socket %d", ms->addr, ms->len, ms->socket_id); return -1; @@ -66,8 +66,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* Request a new chunk of memory from OS. */ addr = eal_mem_alloc_socket(alloc_sz, socket_id); if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate %zu bytes " - "on socket %d\n", alloc_sz, socket_id); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate %zu bytes " + "on socket %d", alloc_sz, socket_id); return -1; } } else { @@ -79,15 +79,15 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, * error, because it breaks MSL assumptions. 
*/ if ((addr != NULL) && (addr != requested_addr)) { - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", requested_addr); return -1; } if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot commit reserved memory %p " - "(size %zu) on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot commit reserved memory %p " + "(size %zu) on socket %d", requested_addr, alloc_sz, socket_id); return -1; } @@ -101,8 +101,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Cannot get IOVA of allocated segment\n"); + RTE_LOG_LINE(DEBUG, EAL, + "Cannot get IOVA of allocated segment"); goto error; } @@ -115,12 +115,12 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, page = &info.VirtualAttributes; if (!page->Valid || !page->LargePage) { - RTE_LOG(DEBUG, EAL, "Got regular page instead of a hugepage\n"); + RTE_LOG_LINE(DEBUG, EAL, "Got regular page instead of a hugepage"); goto error; } if (page->Node != numa_node) { - RTE_LOG(DEBUG, EAL, - "NUMA node hint %u (socket %d) not respected, got %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "NUMA node hint %u (socket %d) not respected, got %u", numa_node, socket_id, page->Node); goto error; } @@ -141,8 +141,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* During decommitment, memory is temporarily returned * to the system and the address may become unavailable. */ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", addr); + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", addr); } return -1; } @@ -153,8 +153,8 @@ free_seg(struct rte_memseg *ms) if (eal_mem_decommit(ms->addr, ms->len)) { if (rte_errno == EADDRNOTAVAIL) { /* See alloc_seg() for explanation. 
*/ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", ms->addr); } return -1; @@ -233,8 +233,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) map_addr = RTE_PTR_ADD(cur_msl->base_va, cur_idx * page_sz); if (alloc_seg(cur, map_addr, wa->socket, wa->hi)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, " - "but only %i were allocated\n", need, i); + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, " + "but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ if (!wa->exact) @@ -249,7 +249,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) rte_fbarray_set_free(arr, j); if (free_seg(tmp)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -318,7 +318,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, eal_get_internal_configuration(); if (internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "dynamic allocation not supported in legacy mode\n"); + RTE_LOG_LINE(ERR, EAL, "dynamic allocation not supported in legacy mode"); return -ENOTSUP; } @@ -330,7 +330,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, } } if (!hi) { - RTE_LOG(ERR, EAL, "cannot find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find relevant hugepage_info entry"); return -1; } @@ -346,7 +346,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "cannot find suitable memseg_list\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find suitable memseg_list"); ret = -1; } else if (ret > 0) { ret = (int)wa.segs_allocated; @@ -383,7 +383,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -396,7 +396,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -411,7 +411,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index fd39155163..7e1e8d4c84 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -114,8 +114,8 @@ eal_mem_win32api_init(void) library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ - RTE_LOG(ERR, EAL, "Windows 10 or Windows Server 2019 " - " is required for memory management\n"); + RTE_LOG_LINE(ERR, EAL, "Windows 10 or Windows Server 2019 " + " is required for memory management"); ret = -1; } @@ -173,8 +173,8 @@ eal_mem_virt2iova_init(void) detail = malloc(detail_size); if (detail == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate virt2phys " - "device interface detail data\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate virt2phys " + "device interface detail data"); goto exit; } @@ -185,7 +185,7 @@ eal_mem_virt2iova_init(void) goto exit; } - RTE_LOG(DEBUG, EAL, "Found virt2phys device: %s\n", detail->DevicePath); + RTE_LOG_LINE(DEBUG, EAL, "Found virt2phys device: %s", detail->DevicePath); virt2phys_device = CreateFile( detail->DevicePath, 0, 0, NULL, OPEN_EXISTING, 0, NULL); @@ -574,8 +574,8 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags, int ret = mem_free(requested_addr, size, true); if (ret) { if (ret > 0) { - RTE_LOG(ERR, EAL, "Cannot map memory " - "to a region not reserved\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot map memory " + "to a region not reserved"); rte_errno = EADDRNOTAVAIL; } return NULL; @@ -691,7 +691,7 @@ eal_nohuge_init(void) NULL, mem_sz, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE); if (addr == NULL) { RTE_LOG_WIN32_ERR("VirtualAlloc(size=%#zx)", mem_sz); - RTE_LOG(ERR, EAL, "Cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory"); return -1; } @@ -702,9 +702,9 @@ eal_nohuge_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't allocate memory due to IOVA " - "exceeding limits of current DMA mask.\n", __func__); + "exceeding limits of current DMA mask.", __func__); return -1; } diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 43b228d388..ee206f365d 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -17,7 +17,7 @@ */ #define EAL_LOG_NOT_IMPLEMENTED() \ do { \ - RTE_LOG(DEBUG, EAL, "%s() is not implemented\n", __func__); \ + RTE_LOG_LINE(DEBUG, EAL, "%s() is not implemented", __func__); \ rte_errno = ENOTSUP; \ } while (0) @@ -25,7 +25,7 @@ * Log current function as a stub. */ #define EAL_LOG_STUB() \ - RTE_LOG(DEBUG, EAL, "Windows: %s() is a stub\n", __func__) + RTE_LOG_LINE(DEBUG, EAL, "Windows: %s() is a stub", __func__) /** * Create a map of processors and cores on the system. diff --git a/lib/eal/windows/include/rte_windows.h b/lib/eal/windows/include/rte_windows.h index 83730c3d2e..015072885b 100644 --- a/lib/eal/windows/include/rte_windows.h +++ b/lib/eal/windows/include/rte_windows.h @@ -48,8 +48,8 @@ extern "C" { * Log GetLastError() with context, usually a Win32 API function and arguments. */ #define RTE_LOG_WIN32_ERR(...) 
\ - RTE_LOG(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", GetLastError(), \ + RTE_LOG_LINE(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ + RTE_FMT_HEAD(__VA_ARGS__,), GetLastError(), \ RTE_FMT_TAIL(__VA_ARGS__,))) #ifdef __cplusplus diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c index 145ac4b5aa..7c62f57e0d 100644 --- a/lib/eal/windows/rte_thread.c +++ b/lib/eal/windows/rte_thread.c @@ -67,7 +67,7 @@ static int thread_log_last_error(const char *message) { DWORD error = GetLastError(); - RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message); + RTE_LOG_LINE(DEBUG, EAL, "GetLastError()=%lu: %s", error, message); return thread_translate_win32_error(error); } @@ -90,7 +90,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = THREAD_PRIORITY_TIME_CRITICAL; break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -109,7 +109,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; case HIGH_PRIORITY_CLASS: - RTE_LOG(WARNING, EAL, "The OS priority class is high not real-time.\n"); + RTE_LOG_LINE(WARNING, EAL, "The OS priority class is high not real-time."); /* FALLTHROUGH */ case REALTIME_PRIORITY_CLASS: if (os_pri == THREAD_PRIORITY_TIME_CRITICAL) { @@ -118,7 +118,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -148,7 +148,7 @@ convert_cpuset_to_affinity(const rte_cpuset_t *cpuset, if (affinity->Group == (USHORT)-1) { affinity->Group = cpu_affinity->Group; } else if (affinity->Group != cpu_affinity->Group) { - RTE_LOG(DEBUG, EAL, "All processors must belong to the same processor group\n"); + RTE_LOG_LINE(DEBUG, EAL, "All processors must belong to the same processor group"); ret = ENOTSUP; goto cleanup; } @@ -194,7 +194,7 @@ rte_thread_create(rte_thread_t *thread_id, ctx = calloc(1, sizeof(*ctx)); if (ctx == NULL) { - RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n"); + RTE_LOG_LINE(DEBUG, EAL, "Insufficient memory for thread context allocations"); ret = ENOMEM; goto cleanup; } @@ -217,7 +217,7 @@ rte_thread_create(rte_thread_t *thread_id, &thread_affinity ); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); thread_exit = true; goto resume_thread; } @@ -232,7 +232,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = rte_thread_set_priority(*thread_id, thread_attr->priority); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to set thread priority\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to set thread priority"); thread_exit = true; goto resume_thread; } @@ -360,7 +360,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) CloseHandle(thread_handle); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } int @@ -446,7 +446,7 @@ rte_thread_key_create(rte_thread_key *key, { *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } @@ -464,7 +464,7 @@ int 
rte_thread_key_delete(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -484,7 +484,7 @@ rte_thread_value_set(rte_thread_key key, const void *value) char *p; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -504,7 +504,7 @@ rte_thread_value_get(rte_thread_key key) void *output; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } @@ -532,7 +532,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id, ret = convert_cpuset_to_affinity(cpuset, &thread_affinity); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); goto cleanup; } diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c index 78fb9250ef..e441263335 100644 --- a/lib/efd/rte_efd.c +++ b/lib/efd/rte_efd.c @@ -512,13 +512,13 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list); if (online_cpu_socket_bitmask == 0) { - RTE_LOG(ERR, EFD, "At least one CPU socket must be enabled " - "in the bitmask\n"); + RTE_LOG_LINE(ERR, EFD, "At least one CPU socket must be enabled " + "in the bitmask"); return NULL; } if (max_num_rules == 0) { - RTE_LOG(ERR, EFD, "Max num rules must be higher than 0\n"); + RTE_LOG_LINE(ERR, EFD, "Max num rules must be higher than 0"); return NULL; } @@ -557,7 +557,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, te = rte_zmalloc("EFD_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, EFD, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -567,15 +567,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD table management structure" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating EFD table management structure" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, "Allocated EFD table management structure " - "on socket %u\n", offline_cpu_socket); + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD table management structure " + "on socket %u", offline_cpu_socket); table->max_num_rules = num_chunks * EFD_TARGET_CHUNK_MAX_NUM_RULES; table->num_rules = 0; @@ -589,16 +589,16 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (key_array == NULL) { - RTE_LOG(ERR, EFD, "Allocating key array" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating key array" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } table->keys = key_array; strlcpy(table->name, name, sizeof(table->name)); - RTE_LOG(DEBUG, EFD, "Creating an EFD table with %u chunks," - " which potentially supports %u entries\n", + RTE_LOG_LINE(DEBUG, EFD, "Creating an EFD table with %u chunks," + " which potentially supports %u entries", num_chunks, table->max_num_rules); /* Make sure all the allocatable table pointers are NULL initially */ @@ -626,15 +626,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, socket_id); if (table->chunks[socket_id] == 
NULL) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Allocating EFD online table on " - "socket %u failed\n", + "socket %u failed", socket_id); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD online table of size " - "%"PRIu64" bytes (%.2f MB) on socket %u\n", + "%"PRIu64" bytes (%.2f MB) on socket %u", online_table_size, (float) online_table_size / (1024.0F * 1024.0F), @@ -678,14 +678,14 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table->offline_chunks == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD offline table on socket %u " - "failed\n", offline_cpu_socket); + RTE_LOG_LINE(ERR, EFD, "Allocating EFD offline table on socket %u " + "failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD offline table of size %"PRIu64" bytes " - " (%.2f MB) on socket %u\n", offline_table_size, + " (%.2f MB) on socket %u", offline_table_size, (float) offline_table_size / (1024.0F * 1024.0F), offline_cpu_socket); @@ -698,7 +698,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, r = rte_ring_create(ring_name, rte_align32pow2(table->max_num_rules), offline_cpu_socket, 0); if (r == NULL) { - RTE_LOG(ERR, EFD, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "memory allocation failed"); rte_efd_free(table); return NULL; } @@ -1018,9 +1018,9 @@ efd_compute_update(struct rte_efd_table * const table, if (found == 0) { /* Key does not exist. Insert the rule into the bin/group */ if (unlikely(current_group->num_rules >= EFD_MAX_GROUP_NUM_RULES)) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Fatal: No room remaining for insert into " - "chunk %u group %u bin %u\n", + "chunk %u group %u bin %u", *chunk_id, current_group_id, *bin_id); return RTE_EFD_UPDATE_FAILED; @@ -1028,9 +1028,9 @@ efd_compute_update(struct rte_efd_table * const table, if (unlikely(current_group->num_rules == (EFD_MAX_GROUP_NUM_RULES - 1))) { - RTE_LOG(INFO, EFD, "Warn: Insert into last " + RTE_LOG_LINE(INFO, EFD, "Warn: Insert into last " "available slot in chunk %u " - "group %u bin %u\n", *chunk_id, + "group %u bin %u", *chunk_id, current_group_id, *bin_id); status = RTE_EFD_UPDATE_WARN_GROUP_FULL; } @@ -1117,10 +1117,10 @@ efd_compute_update(struct rte_efd_table * const table, if (current_group != new_group && new_group->num_rules + bin_size > EFD_MAX_GROUP_NUM_RULES) { - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Unable to move_groups to dest group " "containing %u entries." - "bin_size:%u choice:%02x\n", + "bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); goto next_choice; @@ -1135,9 +1135,9 @@ efd_compute_update(struct rte_efd_table * const table, if (!ret) return status; - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Failed to find perfect hash for group " - "containing %u entries. bin_size:%u choice:%02x\n", + "containing %u entries. 
bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); /* Restore groups modified to their previous state */ revert_groups(current_group, new_group, bin_size); diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index f88e71a59d..3d9bf6fe9d 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -171,8 +171,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) rib = rte_rib_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,9 +217,9 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, + RTE_LOG_LINE(ERR, LPM, "FIB dataplane struct %s memory allocation failed " - "with err %d\n", name, ret); + "with err %d", name, ret); rte_errno = -ret; goto free_fib; } diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index ab1d960479..2d23c09eea 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -171,8 +171,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) rib = rte_rib6_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib6), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,8 +217,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, - "FIB dataplane struct %s memory allocation failed\n", + RTE_LOG_LINE(ERR, LPM, + "FIB dataplane struct %s memory allocation failed", name); rte_errno = -ret; goto free_fib; diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c index 8e4364f060..2a7b38843d 100644 --- a/lib/hash/rte_cuckoo_hash.c +++ b/lib/hash/rte_cuckoo_hash.c @@ -164,7 +164,7 @@ rte_hash_create(const struct rte_hash_parameters 
*params) hash_list = RTE_TAILQ_CAST(rte_hash_tailq.head, rte_hash_list); if (params == NULL) { - RTE_LOG(ERR, HASH, "rte_hash_create has no parameters\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create has no parameters"); return NULL; } @@ -173,13 +173,13 @@ rte_hash_create(const struct rte_hash_parameters *params) (params->entries < RTE_HASH_BUCKET_ENTRIES) || (params->key_len == 0)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create has invalid parameters"); return NULL; } if (params->extra_flag & ~RTE_HASH_EXTRA_FLAGS_MASK) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: unsupported extra flags\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create: unsupported extra flags"); return NULL; } @@ -187,8 +187,8 @@ rte_hash_create(const struct rte_hash_parameters *params) if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) && (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: choose rw concurrency or " - "rw concurrency lock free\n"); + RTE_LOG_LINE(ERR, HASH, "rte_hash_create: choose rw concurrency or " + "rw concurrency lock free"); return NULL; } @@ -238,7 +238,7 @@ rte_hash_create(const struct rte_hash_parameters *params) r = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err; } @@ -254,8 +254,8 @@ rte_hash_create(const struct rte_hash_parameters *params) params->socket_id, 0); if (r_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err; } } @@ -280,7 +280,7 @@ rte_hash_create(const struct rte_hash_parameters *params) te = rte_zmalloc("HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "tailq entry allocation failed"); goto err_unlock; } @@ -288,7 +288,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (h == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -297,7 +297,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets == NULL) { - RTE_LOG(ERR, HASH, "buckets memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "buckets memory allocation failed"); goto err_unlock; } @@ -307,8 +307,8 @@ rte_hash_create(const struct rte_hash_parameters *params) num_buckets * sizeof(struct rte_hash_bucket), RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err_unlock; } /* Populate ext bkt ring. 
We reserve 0 similar to the @@ -323,8 +323,8 @@ rte_hash_create(const struct rte_hash_parameters *params) ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * num_key_slots, 0); if (ext_bkt_to_free == NULL) { - RTE_LOG(ERR, HASH, "ext bkt to free memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext bkt to free memory allocation " + "failed"); goto err_unlock; } } @@ -339,7 +339,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (k == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -347,7 +347,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (tbl_chng_cnt == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -395,7 +395,7 @@ rte_hash_create(const struct rte_hash_parameters *params) sizeof(struct lcore_cache) * RTE_MAX_LCORE, RTE_CACHE_LINE_SIZE, params->socket_id); if (local_free_slots == NULL) { - RTE_LOG(ERR, HASH, "local free slots memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "local free slots memory allocation failed"); goto err_unlock; } } @@ -637,7 +637,7 @@ rte_hash_reset(struct rte_hash *h) /* Reclaim all the resources */ rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL); if (pending != 0) - RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n"); + RTE_LOG_LINE(ERR, HASH, "RCU reclaim all resources failed"); } memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket)); @@ -1511,8 +1511,8 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n) /* Return key indexes to free slot ring */ ret = free_slot(h, rcu_dq_entry.key_idx); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1540,7 +1540,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0); if (hash_rcu_cfg == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); return 1; } @@ -1564,7 +1564,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) h->dq = rte_rcu_qsbr_dq_create(¶ms); if (h->dq == NULL) { rte_free(hash_rcu_cfg); - RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n"); + RTE_LOG_LINE(ERR, HASH, "HASH defer queue creation failed"); return 1; } } else { @@ -1593,8 +1593,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, int ret = free_slot(h, bkt->key_idx[i]); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1783,7 +1783,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, } else if (h->dq) /* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */ if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0) - RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to push QSBR FIFO"); } __hash_rw_writer_unlock(h); return ret; diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c index faeb50cd89..20433a92c8 100644 --- a/lib/hash/rte_fbk_hash.c +++ b/lib/hash/rte_fbk_hash.c @@ -118,7 +118,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params 
*params) te = rte_zmalloc("FBK_HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate tailq entry"); goto exit; } @@ -126,7 +126,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params) ht = rte_zmalloc_socket(hash_name, mem_size, 0, params->socket_id); if (ht == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate fbk hash table"); rte_free(te); goto exit; } diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c index 1439d8a71f..0d52840eaa 100644 --- a/lib/hash/rte_hash_crc.c +++ b/lib/hash/rte_hash_crc.c @@ -34,8 +34,8 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_X86 if (!(alg & CRC32_SSE42_x64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42"); if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42) rte_hash_crc32_alg = CRC32_SSE42; else @@ -44,15 +44,15 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_ARM64 if (!(alg & CRC32_ARM64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_ARM64\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_ARM64"); if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32)) rte_hash_crc32_alg = CRC32_ARM64; #endif if (rte_hash_crc32_alg == CRC32_SW) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_SW\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_SW"); } /* Setting the best available algorithm */ diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index d819dddd84..a5d84eee8e 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -243,8 +243,8 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, /* allocate tailq entry */ te = rte_zmalloc("THASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, - "Can not allocate tailq entry for thash context %s\n", + RTE_LOG_LINE(ERR, HASH, + "Can not allocate tailq entry for thash context %s", name); rte_errno = ENOMEM; goto exit; @@ -252,7 +252,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx = rte_zmalloc(NULL, sizeof(struct rte_thash_ctx) + key_len, 0); if (ctx == NULL) { - RTE_LOG(ERR, HASH, "thash ctx %s memory allocation failed\n", + RTE_LOG_LINE(ERR, HASH, "thash ctx %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; @@ -275,7 +275,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t), RTE_CACHE_LINE_SIZE); if (ctx->matrices == NULL) { - RTE_LOG(ERR, HASH, "Cannot allocate matrices\n"); + RTE_LOG_LINE(ERR, HASH, "Cannot allocate matrices"); rte_errno = ENOMEM; goto free_ctx; } @@ -390,8 +390,8 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr, if (((lfsr->bits_cnt + req_bits) > (1ULL << lfsr->deg) - 1) && ((ctx->flags & RTE_THASH_IGNORE_PERIOD_OVERFLOW) != RTE_THASH_IGNORE_PERIOD_OVERFLOW)) { - RTE_LOG(ERR, HASH, - "Can't generate m-sequence due to period overflow\n"); + RTE_LOG_LINE(ERR, HASH, + "Can't generate m-sequence due to period overflow"); return -ENOSPC; } @@ -470,9 +470,9 @@ insert_before(struct rte_thash_ctx *ctx, return ret; } } else if ((next_ent != NULL) && (end > 
next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -ENOSPC; } @@ -519,9 +519,9 @@ insert_after(struct rte_thash_ctx *ctx, int ret; if ((next_ent != NULL) && (end > next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -EEXIST; } diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c index c863789b51..6b84180b62 100644 --- a/lib/hash/rte_thash_gfni.c +++ b/lib/hash/rte_thash_gfni.c @@ -20,8 +20,8 @@ rte_thash_gfni(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } return 0; @@ -38,8 +38,8 @@ rte_thash_gfni_bulk(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } for (i = 0; i < num; i++) diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c index eed399da6b..02dcac3137 100644 --- a/lib/ip_frag/rte_ip_frag_common.c +++ b/lib/ip_frag/rte_ip_frag_common.c @@ -54,20 +54,20 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries, if (rte_is_power_of_2(bucket_entries) == 0 || nb_entries > UINT32_MAX || nb_entries == 0 || nb_entries < max_entries) { - RTE_LOG(ERR, IPFRAG, "%s: invalid input parameter\n", __func__); + RTE_LOG_LINE(ERR, IPFRAG, "%s: invalid input parameter", __func__); return NULL; } sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]); if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE, socket_id)) == NULL) { - RTE_LOG(ERR, IPFRAG, - "%s: allocation of %zu bytes at socket %d failed do\n", + RTE_LOG_LINE(ERR, IPFRAG, + "%s: allocation of %zu bytes at socket %d failed do", __func__, sz, socket_id); return NULL; } - RTE_LOG(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d\n", + RTE_LOG_LINE(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d", __func__, sz, socket_id); tbl->max_cycles = max_cycles; diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c index f3c1746cca..cc3c2cf4de 100644 --- a/lib/latencystats/rte_latencystats.c +++ b/lib/latencystats/rte_latencystats.c @@ -25,7 +25,6 @@ latencystat_cycles_per_ns(void) return rte_get_timer_hz() / NS_PER_SEC; } -/* Macros for printing using RTE_LOG */ RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO); #define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype @@ -96,7 +95,7 @@ rte_latencystats_update(void) latency_stats_index, values, NUM_LATENCY_STATS); if (ret < 0) - RTE_LOG(INFO, LATENCY_STATS, "Failed to push the stats\n"); + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to push the stats"); return ret; } @@ -228,7 +227,7 @@ rte_latencystats_init(uint64_t app_samp_intvl, mz = rte_memzone_reserve(MZ_RTE_LATENCY_STATS, sizeof(*glob_stats), rte_socket_id(), flags); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d\n", + RTE_LOG_LINE(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d", __func__, __LINE__); return -ENOMEM; } @@ -244,8 +243,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, latency_stats_index = 
rte_metrics_reg_names(ptr_strings, NUM_LATENCY_STATS); if (latency_stats_index < 0) { - RTE_LOG(DEBUG, LATENCY_STATS, - "Failed to register latency stats names\n"); + RTE_LOG_LINE(DEBUG, LATENCY_STATS, + "Failed to register latency stats names"); return -1; } @@ -253,8 +252,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_mbuf_dyn_rx_timestamp_register(×tamp_dynfield_offset, ×tamp_dynflag); if (ret != 0) { - RTE_LOG(ERR, LATENCY_STATS, - "Cannot register mbuf field/flag for timestamp\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Cannot register mbuf field/flag for timestamp"); return -rte_errno; } @@ -264,8 +263,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -276,18 +275,18 @@ rte_latencystats_init(uint64_t app_samp_intvl, cbs->cb = rte_eth_add_first_rx_callback(pid, qid, add_time_stamps, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; cbs->cb = rte_eth_add_tx_callback(pid, qid, calc_latency, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } return 0; @@ -308,8 +307,8 @@ rte_latencystats_uninit(void) ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -319,17 +318,17 @@ rte_latencystats_uninit(void) cbs = &rx_cbs[pid][qid]; ret = rte_eth_remove_rx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; ret = rte_eth_remove_tx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } @@ -366,8 +365,8 @@ rte_latencystats_get(struct rte_metric_value *values, uint16_t size) const struct rte_memzone *mz; mz = rte_memzone_lookup(MZ_RTE_LATENCY_STATS); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, - "Latency stats memzone not found\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Latency stats memzone not found"); return -ENOMEM; } glob_stats = mz->addr; diff --git a/lib/log/log.c b/lib/log/log.c index ab06132a98..fa22d128a7 100644 --- a/lib/log/log.c +++ b/lib/log/log.c @@ -146,7 +146,7 @@ logtype_set_level(uint32_t type, uint32_t level) if (current != level) { rte_logs.dynamic_types[type].loglevel = level; - RTE_LOG(DEBUG, EAL, "%s log level changed from %s to %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s log level changed from %s to %s", rte_logs.dynamic_types[type].name == NULL ? 
"" : rte_logs.dynamic_types[type].name, eal_log_level2str(current), @@ -518,8 +518,8 @@ eal_log_set_default(FILE *default_log) default_log_stream = default_log; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - RTE_LOG(NOTICE, EAL, - "Debug dataplane logs available - lower performance\n"); + RTE_LOG_LINE(NOTICE, EAL, + "Debug dataplane logs available - lower performance"); #endif } diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c index 0ca8214786..a332faf720 100644 --- a/lib/lpm/rte_lpm.c +++ b/lib/lpm/rte_lpm.c @@ -192,7 +192,7 @@ rte_lpm_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry"); rte_errno = ENOMEM; goto exit; } @@ -201,7 +201,7 @@ rte_lpm_create(const char *name, int socket_id, i_lpm = rte_zmalloc_socket(mem_name, mem_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto exit; @@ -211,7 +211,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM rules_tbl memory allocation failed"); rte_free(i_lpm); i_lpm = NULL; rte_free(te); @@ -223,7 +223,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->lpm.tbl8 == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 memory allocation failed"); rte_free(i_lpm->rules_tbl); rte_free(i_lpm); i_lpm = NULL; @@ -338,7 +338,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg) params.v = cfg->v; i_lpm->dq = rte_rcu_qsbr_dq_create(¶ms); if (i_lpm->dq == NULL) { - RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM defer queue creation failed"); return 1; } } else { @@ -565,7 +565,7 @@ tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start) status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq, (void *)&tbl8_group_start); if (status == 1) { - RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to push QSBR FIFO"); return -rte_errno; } } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 24ce7dd022..251bfcc73d 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", + 
RTE_LOG_LINE(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -330,7 +330,7 @@ rte_lpm6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM6_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry!"); rte_errno = ENOMEM; goto fail; } @@ -340,7 +340,7 @@ rte_lpm6_create(const char *name, int socket_id, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto fail; diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c index 3eccc61827..8472c6a977 100644 --- a/lib/mbuf/rte_mbuf.c +++ b/lib/mbuf/rte_mbuf.c @@ -231,7 +231,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -251,7 +251,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; @@ -297,7 +297,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -307,12 +307,12 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, const struct rte_pktmbuf_extmem *extm = ext_mem + i; if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) { - RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n"); + RTE_LOG_LINE(ERR, MBUF, "invalid extmem descriptor"); rte_errno = EINVAL; return NULL; } if (data_room_size > extm->elt_size) { - RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n", + RTE_LOG_LINE(ERR, MBUF, "ext elt_size=%u is too small", priv_size); rte_errno = EINVAL; return NULL; @@ -321,7 +321,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, } /* Check whether enough external memory provided. 
*/ if (n_elts < n) { - RTE_LOG(ERR, MBUF, "not enough extmem\n"); + RTE_LOG_LINE(ERR, MBUF, "not enough extmem"); rte_errno = ENOMEM; return NULL; } @@ -342,7 +342,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c index 4fb1863a10..a9f7bb2b81 100644 --- a/lib/mbuf/rte_mbuf_dyn.c +++ b/lib/mbuf/rte_mbuf_dyn.c @@ -118,7 +118,7 @@ init_shared_mem(void) mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME); } if (mz == NULL) { - RTE_LOG(ERR, MBUF, "Failed to get mbuf dyn shared memory\n"); + RTE_LOG_LINE(ERR, MBUF, "Failed to get mbuf dyn shared memory"); return -1; } @@ -317,7 +317,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, shm->free_space[i] = 0; process_score(); - RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd", params->name, params->size, params->align, params->flags, offset); @@ -491,7 +491,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params, shm->free_flags &= ~(1ULL << bitnum); - RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u", params->name, params->flags, bitnum); return bitnum; @@ -592,8 +592,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, offset = rte_mbuf_dynfield_register(&field_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf field for timestamp\n"); + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf field for timestamp"); return -1; } if (field_offset != NULL) @@ -602,8 +602,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, strlcpy(flag_desc.name, flag_name, sizeof(flag_desc.name)); offset = rte_mbuf_dynflag_register(&flag_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf flag for %s timestamp\n", + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf flag for %s timestamp", direction); return -1; } diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c index 5318430126..639aa557f8 100644 --- a/lib/mbuf/rte_mbuf_pool_ops.c +++ b/lib/mbuf/rte_mbuf_pool_ops.c @@ -33,8 +33,8 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name) return 0; } - RTE_LOG(ERR, MBUF, - "%s is already registered as platform mbuf pool ops\n", + RTE_LOG_LINE(ERR, MBUF, + "%s is already registered as platform mbuf pool ops", (char *)mz->addr); return -EEXIST; } diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 2f8adad5ca..b66c8898a8 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -775,7 +775,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id) cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE, socket_id); if (cache == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate mempool cache."); rte_errno = ENOMEM; return NULL; } @@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, /* try to allocate tailq entry */ te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0); if (te == 
NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate tailq entry!"); goto exit_unlock; } @@ -1088,16 +1088,16 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, if (free == 0) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (put)\n"); } hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2; } else if (free == 1) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (get)\n"); } @@ -1105,8 +1105,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, } else if (free == 2) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 && cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (audit)\n"); } @@ -1114,8 +1114,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, tlr = rte_mempool_get_trailer(obj); cookie = tlr->cookie; if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad trailer cookie\n"); } @@ -1200,7 +1200,7 @@ mempool_audit_cache(const struct rte_mempool *mp) const struct rte_mempool_cache *cache; cache = &mp->local_cache[lcore_id]; if (cache->len > RTE_DIM(cache->objs)) { - RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "badness on cache[%u]", lcore_id); rte_panic("MEMPOOL: invalid cache len\n"); } @@ -1429,7 +1429,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func, cb = calloc(1, sizeof(*cb)); if (cb == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate event callback!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate event callback!"); ret = -ENOMEM; goto exit; } diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 4f8511b8f5..30ce579737 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -847,7 +847,7 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table, ret = ops->enqueue(mp, obj_table, n); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (unlikely(ret < 0)) - RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s", n, mp->name); #endif return ret; diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index e871de9ec9..d35e9b118b 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -31,22 +31,22 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (rte_mempool_ops_table.num_ops >= RTE_MEMPOOL_MAX_OPS_IDX) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(ERR, MEMPOOL, - "Maximum number of mempool ops structs exceeded\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Maximum number of mempool ops structs exceeded"); return -ENOSPC; } if (h->alloc == NULL || h->enqueue == NULL || h->dequeue == NULL || h->get_count == NULL) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - 
RTE_LOG(ERR, MEMPOOL, - "Missing callback while registering mempool ops\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Missing callback while registering mempool ops"); return -EINVAL; } if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long", __func__, h->name); rte_errno = EEXIST; return -EEXIST; diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c index 436cf54953..fe91c48947 100644 --- a/lib/pipeline/rte_pipeline.c +++ b/lib/pipeline/rte_pipeline.c @@ -160,22 +160,22 @@ static int rte_pipeline_check_params(struct rte_pipeline_params *params) { if (params == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* name */ if (params->name == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter name\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter name", __func__); return -EINVAL; } /* socket */ if (params->socket_id < 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter socket_id\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter socket_id", __func__); return -EINVAL; } @@ -192,8 +192,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) /* Check input parameters */ status = rte_pipeline_check_params(params); if (status != 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline params check failed (%d)\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline params check failed (%d)", __func__, status); return NULL; } @@ -203,8 +203,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline memory allocation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline memory allocation failed", __func__); return NULL; } @@ -232,8 +232,8 @@ rte_pipeline_free(struct rte_pipeline *p) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: rte_pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: rte_pipeline parameter is NULL", __func__); return -EINVAL; } @@ -273,44 +273,44 @@ rte_table_check_params(struct rte_pipeline *p, uint32_t *table_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter is NULL", __func__); return -EINVAL; } if (table_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: table_id parameter is NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops is NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer is NULL", __func__); return -EINVAL; } if (params->ops->f_lookup == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_lookup function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_lookup function pointer is NULL", 
__func__); return -EINVAL; } /* De we have room for one more table? */ if (p->num_tables == RTE_PIPELINE_TABLE_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for num_tables parameter\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for num_tables parameter", __func__); return -EINVAL; } @@ -343,8 +343,8 @@ rte_pipeline_table_create(struct rte_pipeline *p, default_entry = rte_zmalloc_socket( "PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id); if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Failed to allocate default entry\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Failed to allocate default entry", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_pipeline_table_create(struct rte_pipeline *p, entry_size); if (h_table == NULL) { rte_free(default_entry); - RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Table creation failed", __func__); return -EINVAL; } @@ -399,20 +399,20 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: default_entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: default_entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -421,8 +421,8 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (default_entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -448,14 +448,14 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -484,32 +484,32 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer 
NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add function pointer NULL", __func__); return -EINVAL; } @@ -517,8 +517,8 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, if ((entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -544,28 +544,28 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_delete == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -585,32 +585,32 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entries parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL", __func__); return -EINVAL; } @@ -619,8 +619,8 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, if ((entries[i]->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entries[i]->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } } @@ -649,28 +649,28 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = 
&p->tables[table_id]; if (table->ops.f_delete_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -687,35 +687,35 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_rx == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_rx function pointer NULL", __func__); return -EINVAL; } @@ -723,15 +723,15 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, /* burst_size */ if ((params->burst_size == 0) || (params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: invalid value for burst_size", __func__); return -EINVAL; } /* Do we have room for one more port? */ if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_in\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_in", __func__); return -EINVAL; } @@ -744,51 +744,51 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx_bulk function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx_bulk function pointer NULL", __func__); return -EINVAL; } /* Do we have room for one more port? 
*/ if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_out\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_out", __func__); return -EINVAL; } @@ -816,7 +816,7 @@ rte_pipeline_port_in_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -866,7 +866,7 @@ rte_pipeline_port_out_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -901,21 +901,21 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: Table ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Table ID %u is out of range", __func__, table_id); return -EINVAL; } @@ -935,14 +935,14 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -982,13 +982,13 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1035,7 +1035,7 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1043,17 +1043,17 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check that pipeline has at least one input port, one table and one output port */ if (p->num_ports_in == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 input port", __func__); return -EINVAL; } if (p->num_tables == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 table", __func__); return -EINVAL; } if (p->num_ports_out == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 output port", __func__); return -EINVAL; } @@ 
-1063,8 +1063,8 @@ rte_pipeline_check(struct rte_pipeline *p) struct rte_port_in *port_in = &p->ports_in[port_in_id]; if (port_in->table_id == RTE_TABLE_INVALID) { - RTE_LOG(ERR, PIPELINE, - "%s: Port IN ID %u is not connected\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Port IN ID %u is not connected", __func__, port_in_id); return -EINVAL; } @@ -1447,7 +1447,7 @@ rte_pipeline_flush(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1500,14 +1500,14 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1537,13 +1537,13 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_out) { - RTE_LOG(ERR, PIPELINE, - "%s: port OUT ID %u is out of range\n", __func__, port_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port OUT ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1571,14 +1571,14 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table %u is out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table %u is out of range", __func__, table_id); return -EINVAL; } diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c index e6bb7ee480..7f7eadda11 100644 --- a/lib/port/rte_port_ethdev.c +++ b/lib/port/rte_port_ethdev.c @@ -43,7 +43,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -51,7 +51,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -78,7 +78,7 @@ static int rte_port_ethdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -142,7 +142,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -150,7 +150,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", 
sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -257,7 +257,7 @@ static int rte_port_ethdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -323,7 +323,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -331,7 +331,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -470,7 +470,7 @@ static int rte_port_ethdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c index 13350fd608..1d0571966c 100644 --- a/lib/port/rte_port_eventdev.c +++ b/lib/port/rte_port_eventdev.c @@ -45,7 +45,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -53,7 +53,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -85,7 +85,7 @@ static int rte_port_eventdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -155,7 +155,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -163,7 +163,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -290,7 +290,7 @@ static int rte_port_eventdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -362,7 +362,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + 
RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -530,7 +530,7 @@ static int rte_port_eventdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c index 7e140793b2..1b95d7b014 100644 --- a/lib/port/rte_port_fd.c +++ b/lib/port/rte_port_fd.c @@ -43,19 +43,19 @@ rte_port_fd_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } if (conf->fd < 0) { - RTE_LOG(ERR, PORT, "%s: Invalid file descriptor\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid file descriptor", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Invalid MTU\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid MTU", __func__); return NULL; } if (conf->mempool == NULL) { - RTE_LOG(ERR, PORT, "%s: Invalid mempool\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid mempool", __func__); return NULL; } @@ -63,7 +63,7 @@ rte_port_fd_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -109,7 +109,7 @@ static int rte_port_fd_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -171,7 +171,7 @@ rte_port_fd_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -179,7 +179,7 @@ rte_port_fd_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -279,7 +279,7 @@ static int rte_port_fd_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -344,7 +344,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -352,7 +352,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate 
port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -464,7 +464,7 @@ static int rte_port_fd_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c index e1f1892176..39ff31e447 100644 --- a/lib/port/rte_port_frag.c +++ b/lib/port/rte_port_frag.c @@ -62,24 +62,24 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter mtu is invalid", __func__); return NULL; } if (conf->pool_direct == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_direct is NULL", __func__); return NULL; } if (conf->pool_indirect == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_indirect is NULL", __func__); return NULL; } @@ -88,7 +88,7 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return NULL; } @@ -232,7 +232,7 @@ static int rte_port_ring_reader_frag_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c index 15109661d1..1e697fd226 100644 --- a/lib/port/rte_port_ras.c +++ b/lib/port/rte_port_ras.c @@ -69,16 +69,16 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if ((conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter tx_burst_sz is invalid", __func__); return NULL; } @@ -87,7 +87,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate socket", __func__); return NULL; } @@ -103,7 +103,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) socket_id); if (port->frag_tbl == NULL) { - RTE_LOG(ERR, PORT, "%s: rte_ip_frag_table_create failed\n", + RTE_LOG_LINE(ERR, PORT, "%s: rte_ip_frag_table_create failed", __func__); rte_free(port); 
return NULL; @@ -282,7 +282,7 @@ rte_port_ring_writer_ras_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c index 002efb7c3e..42b33763d1 100644 --- a/lib/port/rte_port_ring.c +++ b/lib/port/rte_port_ring.c @@ -46,7 +46,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, (conf->ring == NULL) || (rte_ring_is_cons_single(conf->ring) && is_multi) || (!rte_ring_is_cons_single(conf->ring) && !is_multi)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -54,7 +54,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -107,7 +107,7 @@ static int rte_port_ring_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -174,7 +174,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -182,7 +182,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_ring_writer_free(void *port) struct rte_port_ring_writer *p = port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -443,7 +443,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -451,7 +451,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -703,7 +703,7 @@ rte_port_ring_writer_nodrop_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c index f6255c4346..e83112989f 100644 --- a/lib/port/rte_port_sched.c +++ b/lib/port/rte_port_sched.c @@ -40,7 +40,7 @@ rte_port_sched_reader_create(void *params, int socket_id) /* Check input parameters */ if ((conf == NULL) || (conf->sched == NULL)) { 
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -48,7 +48,7 @@ rte_port_sched_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -74,7 +74,7 @@ static int rte_port_sched_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -139,7 +139,7 @@ rte_port_sched_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -147,7 +147,7 @@ rte_port_sched_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -247,7 +247,7 @@ static int rte_port_sched_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c index ff9677cdfe..cb4b7fa7fb 100644 --- a/lib/port/rte_port_source_sink.c +++ b/lib/port/rte_port_source_sink.c @@ -75,8 +75,8 @@ pcap_source_load(struct rte_port_source *port, /* first time open, get packet number */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -88,29 +88,29 @@ pcap_source_load(struct rte_port_source *port, port->pkt_len = rte_zmalloc_socket("PCAP", (sizeof(*port->pkt_len) * n_pkts), 0, socket_id); if (port->pkt_len == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } pkt_len_aligns = rte_malloc("PCAP", (sizeof(*pkt_len_aligns) * n_pkts), 0); if (pkt_len_aligns == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } port->pkts = rte_zmalloc_socket("PCAP", (sizeof(*port->pkts) * n_pkts), 0, socket_id); if (port->pkts == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } /* open 2nd time, get pkt_len */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -128,7 +128,7 @@ pcap_source_load(struct rte_port_source *port, buff = rte_zmalloc_socket("PCAP", total_buff_len, 0, socket_id); if (buff == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } @@ -137,8 +137,8 @@ pcap_source_load(struct rte_port_source *port, /* open file one last time to copy the pkt content 
*/ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -155,8 +155,8 @@ pcap_source_load(struct rte_port_source *port, rte_free(pkt_len_aligns); - RTE_LOG(INFO, PORT, "Successfully load pcap file " - "'%s' with %u pkts\n", + RTE_LOG_LINE(INFO, PORT, "Successfully load pcap file " + "'%s' with %u pkts", file_name, port->n_pkts); return 0; @@ -180,8 +180,8 @@ pcap_source_load(struct rte_port_source *port, int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Source port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Source port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -199,7 +199,7 @@ rte_port_source_create(void *params, int socket_id) /* Check input arguments*/ if ((p == NULL) || (p->mempool == NULL)) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -207,7 +207,7 @@ rte_port_source_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -332,15 +332,15 @@ pcap_sink_open(struct rte_port_sink *port, /** Open a dead pcap handler for opening dumper file */ tx_pcap = pcap_open_dead(DLT_EN10MB, 65535); if (tx_pcap == NULL) { - RTE_LOG(ERR, PORT, "Cannot open pcap dead handler\n"); + RTE_LOG_LINE(ERR, PORT, "Cannot open pcap dead handler"); return -1; } /* The dumper is created using the previous pcap_t reference */ pcap_dumper = pcap_dump_open(tx_pcap, file_name); if (pcap_dumper == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "\"%s\" for writing\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "\"%s\" for writing", file_name); return -1; } @@ -349,7 +349,7 @@ pcap_sink_open(struct rte_port_sink *port, port->pkt_index = 0; port->dump_finish = 0; - RTE_LOG(INFO, PORT, "Ready to dump packets to file \"%s\"\n", + RTE_LOG_LINE(INFO, PORT, "Ready to dump packets to file \"%s\"", file_name); return 0; @@ -402,7 +402,7 @@ pcap_sink_write_pkt(struct rte_port_sink *port, struct rte_mbuf *mbuf) if ((port->max_pkts != 0) && (port->pkt_index >= port->max_pkts)) { port->dump_finish = 1; - RTE_LOG(INFO, PORT, "Dumped %u packets to file\n", + RTE_LOG_LINE(INFO, PORT, "Dumped %u packets to file", port->pkt_index); } @@ -433,8 +433,8 @@ do { \ int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Sink port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Sink port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -459,7 +459,7 @@ rte_port_sink_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c index 27b7e07cea..8e9abff9d6 100644 --- a/lib/port/rte_port_sym_crypto.c +++ b/lib/port/rte_port_sym_crypto.c @@ -44,7 +44,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - 
RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -52,7 +52,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -100,7 +100,7 @@ static int rte_port_sym_crypto_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -167,7 +167,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -175,7 +175,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -285,7 +285,7 @@ static int rte_port_sym_crypto_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -361,7 +361,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -497,7 +497,7 @@ static int rte_port_sym_crypto_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index a6f2097d5b..a9bbda8f48 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -59,38 +59,38 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) int fd = -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } /* check if path is already open */ if (global_fds[lcore_id] != -1) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d", lcore_id, global_fds[lcore_id]); return -1; } snprintf(fd_path, PATH_MAX, "%s.%u", path, lcore_id); - RTE_LOG(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u\n", + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u", fd_path, lcore_id); fd = open(fd_path, O_RDWR); if (fd < 0) { - 
RTE_LOG(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " - "%s\n", fd_path, strerror(errno)); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " + "%s", fd_path, strerror(errno)); return -1; } flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s", fd_path); goto error; } flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s\n", fd_path); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " + "file %s", fd_path); goto error; } /* QEMU needs a delay after connection */ @@ -103,13 +103,13 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) global_fds[lcore_id] = fd; ret = guest_channel_send_msg(&pkt, lcore_id); if (ret != 0) { - RTE_LOG(ERR, GUEST_CHANNEL, - "Error on channel '%s' communications test: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, + "Error on channel '%s' communications test: %s", fd_path, ret > 0 ? strerror(ret) : "channel not connected"); goto error; } - RTE_LOG(INFO, GUEST_CHANNEL, "Channel '%s' is now connected\n", fd_path); + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Channel '%s' is now connected", fd_path); return 0; error: close(fd); @@ -125,13 +125,13 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt, void *buffer = pkt; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } while (buffer_len > 0) { @@ -166,13 +166,13 @@ int power_guest_channel_read_msg(void *pkt, return -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } @@ -181,10 +181,10 @@ int power_guest_channel_read_msg(void *pkt, ret = poll(&fds, 1, TIMEOUT); if (ret == 0) { - RTE_LOG(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.\n"); + RTE_LOG_LINE(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function."); return -1; } else if (ret < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s", strerror(errno)); return -1; } @@ -200,7 +200,7 @@ int power_guest_channel_read_msg(void *pkt, } if (ret == 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed."); return -1; } pkt = (char *)pkt + ret; @@ -221,7 +221,7 @@ void guest_channel_host_disconnect(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index 8b55f19247..dd143f2cc8 100644 --- 
a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -63,8 +63,8 @@ static int set_freq_internal(struct acpi_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -75,13 +75,13 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -127,14 +127,14 @@ power_get_available_freqs(struct acpi_power_info *pi) open_core_sysfs_file(&f, "r", POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } @@ -143,12 +143,12 @@ power_get_available_freqs(struct acpi_power_info *pi) count = rte_strsplit(buf, sizeof(buf), freqs, RTE_MAX_LCORE_FREQS, ' '); if (count <= 0) { - RTE_LOG(ERR, POWER, "No available frequency in " - ""POWER_SYSFILE_AVAIL_FREQ"\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "No available frequency in " + POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); goto out; } if (count >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies : %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies : %d", count); goto out; } @@ -196,14 +196,14 @@ power_init_for_setting_freq(struct acpi_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "Failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -237,7 +237,7 @@ power_acpi_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -253,42 +253,42 @@ power_acpi_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of 
lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -310,7 +310,7 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -325,8 +325,8 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -336,14 +336,14 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -364,18 +364,18 @@ power_acpi_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -387,7 +387,7 @@ uint32_t power_acpi_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -398,7 +398,7 @@ int power_acpi_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -411,7 +411,7 @@ power_acpi_cpufreq_freq_down(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -429,7 +429,7 @@ power_acpi_cpufreq_freq_up(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -446,7 +446,7 @@ int power_acpi_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -470,7 +470,7 @@ power_acpi_cpufreq_freq_min(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -487,7 +487,7 @@ power_acpi_turbo_status(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -503,7 +503,7 @@ power_acpi_enable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -513,16 +513,16 @@ power_acpi_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } /* Max may have changed, so call to max function */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -536,7 +536,7 @@ power_acpi_disable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -547,8 +547,8 @@ power_acpi_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -563,11 +563,11 @@ int power_acpi_get_capabilities(unsigned int lcore_id, struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index dbd9d2b3ee..44581fd48b 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -70,8 +70,8 @@ static int 
set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -82,13 +82,13 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -119,7 +119,7 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -127,21 +127,21 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -190,7 +190,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -198,7 +198,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -206,28 +206,28 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_FREQ, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_nom, &nominal_freq); if (ret < 0) { - RTE_LOG(ERR, 
POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } @@ -235,8 +235,8 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) power_check_turbo(pi); if (scaling_max_freq < scaling_min_freq) { - RTE_LOG(ERR, POWER, "scaling min freq exceeds max freq, " - "not expected! Check system power policy\n"); + RTE_LOG_LINE(ERR, POWER, "scaling min freq exceeds max freq, " + "not expected! Check system power policy"); goto out; } else if (scaling_max_freq == scaling_min_freq) { num_freqs = 1; @@ -304,14 +304,14 @@ power_init_for_setting_freq(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -355,7 +355,7 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -371,42 +371,42 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -434,7 +434,7 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -449,8 +449,8 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) if 
(!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -460,14 +460,14 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -484,18 +484,18 @@ power_amd_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -507,7 +507,7 @@ uint32_t power_amd_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -518,7 +518,7 @@ int power_amd_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -531,7 +531,7 @@ power_amd_pstate_cpufreq_freq_down(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -549,7 +549,7 @@ power_amd_pstate_cpufreq_freq_up(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -566,7 +566,7 @@ int power_amd_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -591,7 +591,7 @@ power_amd_pstate_cpufreq_freq_min(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_amd_pstate_turbo_status(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -622,7 +622,7 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -632,8 +632,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -643,8 +643,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -658,7 +658,7 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -669,8 +669,8 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= pi->nom_idx)) { /* Try to set freq to max by default coming out of turbo */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -686,11 +686,11 @@ power_amd_pstate_get_capabilities(unsigned int lcore_id, struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bf77eafa88..bc57642cd1 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -163,14 +163,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, open_core_sysfs_file(&f_governor, "rw+", POWER_SYSFILE_GOVERNOR, lcore_id); if (f_governor == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = read_core_sysfs_s(f_governor, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_GOVERNOR); goto out; } @@ -190,14 +190,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Write the new governor */ ret = write_core_sysfs_s(f_governor, new_governor); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to write %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to write %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = 0; - RTE_LOG(INFO, POWER, "Power management governor of lcore %u has been " - "set to '%s' successfully\n", lcore_id, new_governor); + RTE_LOG_LINE(INFO, POWER, "Power management governor of lcore %u has been " + "set to '%s' successfully", lcore_id, new_governor); out: if (f_governor != NULL) fclose(f_governor); diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index bb70f6ae52..83e1e62830 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -73,8 +73,8 @@ static int set_freq_internal(struct cppc_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, 
POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -85,13 +85,13 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -122,7 +122,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -130,7 +130,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -138,28 +138,28 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_cmax, "r", POWER_SYSFILE_SYS_MAX, pi->lcore_id); if (f_cmax == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SYS_MAX); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_cmax, &cpuinfo_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SYS_MAX); goto err; } @@ -209,7 +209,7 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -217,21 +217,21 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -249,7 +249,7 @@ power_get_available_freqs(struct cppc_power_info *pi) num_freqs = (nominal_perf - scaling_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= 
RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -290,14 +290,14 @@ power_init_for_setting_freq(struct cppc_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -341,7 +341,7 @@ power_cppc_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -357,42 +357,42 @@ power_cppc_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -420,7 +420,7 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -435,8 +435,8 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -446,14 +446,14 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) 
< 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -470,18 +470,18 @@ power_cppc_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -493,7 +493,7 @@ uint32_t power_cppc_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -504,7 +504,7 @@ int power_cppc_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -517,7 +517,7 @@ power_cppc_cpufreq_freq_down(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -535,7 +535,7 @@ power_cppc_cpufreq_freq_up(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -552,7 +552,7 @@ int power_cppc_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -576,7 +576,7 @@ power_cppc_cpufreq_freq_min(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -592,7 +592,7 @@ power_cppc_turbo_status(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -617,8 +617,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -628,8 +628,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set 
frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -643,7 +643,7 @@ power_cppc_disable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -654,8 +654,8 @@ power_cppc_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -671,11 +671,11 @@ power_cppc_get_capabilities(unsigned int lcore_id, struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 688aebc4ee..0ee8e603d2 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -52,8 +52,8 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) int ret; if (idx >= MAX_UNCORE_FREQS || idx >= ui->nb_freqs) { - RTE_LOG(DEBUG, POWER, "Invalid uncore frequency index %u, which " - "should be less than %u\n", idx, ui->nb_freqs); + RTE_LOG_LINE(DEBUG, POWER, "Invalid uncore frequency index %u, which " + "should be less than %u", idx, ui->nb_freqs); return -1; } @@ -65,13 +65,13 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) open_core_sysfs_file(&ui->f_cur_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (ui->f_cur_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); return -1; } ret = read_core_sysfs_u32(ui->f_cur_max, &curr_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); fclose(ui->f_cur_max); return -1; @@ -79,14 +79,14 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) /* check this value first before fprintf value to f_cur_max, so value isn't overwritten */ if (fprintf(ui->f_cur_min, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -121,13 +121,13 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_max, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ, ui->pkg, ui->die); if (f_base_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } ret = read_core_sysfs_u32(f_base_max, &base_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, 
"Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -136,14 +136,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_min, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ, ui->pkg, ui->die); if (f_base_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } if (f_base_min != NULL) { ret = read_core_sysfs_u32(f_base_min, &base_min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -153,14 +153,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_min, "rw+", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ, ui->pkg, ui->die); if (f_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } if (f_min != NULL) { ret = read_core_sysfs_u32(f_min, &min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } @@ -170,14 +170,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (f_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } if (f_max != NULL) { ret = read_core_sysfs_u32(f_max, &max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } @@ -222,7 +222,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) num_uncore_freqs = (ui->init_max_freq - ui->init_min_freq) / BUS_FREQ + 1; if (num_uncore_freqs >= MAX_UNCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available uncore frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available uncore frequencies: %d", num_uncore_freqs); goto out; } @@ -250,7 +250,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_pkgs == 0) return -1; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Package number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Package number %02u can not exceed %u", pkg, max_pkgs); return -1; } @@ -259,7 +259,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_dies == 0) return -1; if (die >= max_dies) { - RTE_LOG(DEBUG, POWER, "Die number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Die number %02u can not exceed %u", die, max_dies); return -1; } @@ -282,15 +282,15 @@ power_intel_uncore_init(unsigned int pkg, unsigned int die) /* Init for setting uncore die frequency */ if (power_init_for_setting_uncore_freq(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot init for setting uncore frequency for " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot init for setting uncore frequency for " + "pkg %02u die %02u", pkg, die); return -1; } /* Get the available frequencies */ if (power_get_available_uncore_freqs(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot get available uncore frequencies of " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot get available uncore frequencies of " + "pkg %02u die %02u", pkg, 
die); return -1; } @@ -309,14 +309,14 @@ power_intel_uncore_exit(unsigned int pkg, unsigned int die) ui = &uncore_info[pkg][die]; if (fprintf(ui->f_cur_min, "%u", ui->org_min_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", ui->org_max_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -385,13 +385,13 @@ power_intel_uncore_freqs(unsigned int pkg, unsigned int die, uint32_t *freqs, ui return -1; if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } ui = &uncore_info[pkg][die]; if (num < ui->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, ui->freqs, ui->nb_freqs * sizeof(uint32_t)); @@ -419,10 +419,10 @@ power_intel_uncore_get_num_pkgs(void) d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. " "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } @@ -451,16 +451,16 @@ power_intel_uncore_get_num_dies(unsigned int pkg) if (max_pkgs == 0) return 0; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Invalid package number\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid package number"); return 0; } d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. 
" "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } diff --git a/lib/power/power_kvm_vm.c b/lib/power/power_kvm_vm.c index db031f4310..218799491e 100644 --- a/lib/power/power_kvm_vm.c +++ b/lib/power/power_kvm_vm.c @@ -25,7 +25,7 @@ int power_kvm_vm_init(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -46,16 +46,16 @@ power_kvm_vm_freqs(__rte_unused unsigned int lcore_id, __rte_unused uint32_t *freqs, __rte_unused uint32_t num) { - RTE_LOG(ERR, POWER, "rte_power_freqs is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_freqs is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } uint32_t power_kvm_vm_get_freq(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_get_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -63,8 +63,8 @@ int power_kvm_vm_set_freq(__rte_unused unsigned int lcore_id, __rte_unused uint32_t index) { - RTE_LOG(ERR, POWER, "rte_power_set_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_set_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -74,7 +74,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) int ret; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -82,7 +82,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) ret = guest_channel_send_msg(&pkt[lcore_id], lcore_id); if (ret == 0) return 1; - RTE_LOG(DEBUG, POWER, "Error sending message: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Error sending message: %s", ret > 0 ? 
strerror(ret) : "channel not connected"); return -1; } @@ -114,7 +114,7 @@ power_kvm_vm_freq_min(unsigned int lcore_id) int power_kvm_vm_turbo_status(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } @@ -134,6 +134,6 @@ struct rte_power_core_capabilities; int power_kvm_vm_get_capabilities(__rte_unused unsigned int lcore_id, __rte_unused struct rte_power_core_capabilities *caps) { - RTE_LOG(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 5ca5f60bcd..56aa302b5d 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -82,7 +82,7 @@ power_read_turbo_pct(uint64_t *outVal) fd = open(POWER_SYSFILE_TURBO_PCT, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, POWER, "Error opening '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error opening '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); return fd; } @@ -90,7 +90,7 @@ power_read_turbo_pct(uint64_t *outVal) ret = read(fd, val, sizeof(val)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Error reading '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error reading '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); goto out; } @@ -98,7 +98,7 @@ power_read_turbo_pct(uint64_t *outVal) errno = 0; *outVal = (uint64_t) strtol(val, &endptr, 10); if (errno != 0 || (*endptr != 0 && *endptr != '\n')) { - RTE_LOG(ERR, POWER, "Error converting str to digits, read from %s: %s\n", + RTE_LOG_LINE(ERR, POWER, "Error converting str to digits, read from %s: %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); ret = -1; goto out; @@ -126,7 +126,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_base_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -134,7 +134,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_base_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -142,7 +142,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "rw+", POWER_SYSFILE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MIN_FREQ); goto err; } @@ -150,7 +150,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "rw+", POWER_SYSFILE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MAX_FREQ); goto err; } @@ -162,7 +162,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base max ratio */ ret = read_core_sysfs_u32(f_base_max, &base_max_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", 
POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -170,7 +170,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base min ratio */ ret = read_core_sysfs_u32(f_base_min, &base_min_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -179,7 +179,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) if (f_base != NULL) { ret = read_core_sysfs_u32(f_base, &base_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_FREQ); goto err; } @@ -257,8 +257,8 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) uint32_t target_freq = 0; if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -270,15 +270,15 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) * User need change the min/max as same value. */ if (fseek(pi->f_cur_min, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fseek(pi->f_cur_max, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } @@ -288,7 +288,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (pi->turbo_enable) target_freq = pi->sys_max_freq; else { - RTE_LOG(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u\n", + RTE_LOG_LINE(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u", pi->lcore_id); return -1; } @@ -299,14 +299,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx > pi->curr_idx) { if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -322,14 +322,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx < pi->curr_idx) { if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -384,7 +384,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open 
%s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } @@ -392,7 +392,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -400,14 +400,14 @@ power_get_available_freqs(struct pstate_power_info *pi) /* read base ratios */ ret = read_core_sysfs_u32(f_max, &sys_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &sys_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -450,7 +450,7 @@ power_get_available_freqs(struct pstate_power_info *pi) num_freqs = (RTE_MIN(base_max_freq, sys_max_freq) - sys_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -494,14 +494,14 @@ power_get_cur_idx(struct pstate_power_info *pi) open_core_sysfs_file(&f_cur, "r", POWER_SYSFILE_CUR_FREQ, pi->lcore_id); if (f_cur == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_CUR_FREQ); goto fail; } ret = read_core_sysfs_u32(f_cur, &sys_cur_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_CUR_FREQ); goto fail; } @@ -543,7 +543,7 @@ power_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceed %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceed %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -559,47 +559,47 @@ power_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_performance(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "performance\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "performance", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } if (power_get_cur_idx(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get current frequency " - "index of lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get current frequency " + "index of lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_pstate_cpufreq_freq_max(lcore_id) < 
0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -621,7 +621,7 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -637,8 +637,8 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -650,14 +650,14 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'performance' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -679,18 +679,18 @@ power_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -702,7 +702,7 @@ uint32_t power_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -714,7 +714,7 @@ int power_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -727,7 +727,7 @@ power_pstate_cpufreq_freq_up(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -746,7 +746,7 @@ power_pstate_cpufreq_freq_down(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore 
ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -762,7 +762,7 @@ int power_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -787,7 +787,7 @@ power_pstate_cpufreq_freq_min(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -804,7 +804,7 @@ power_pstate_turbo_status(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -819,7 +819,7 @@ power_pstate_enable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -829,8 +829,8 @@ power_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -845,7 +845,7 @@ power_pstate_disable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -856,8 +856,8 @@ power_pstate_disable_turbo(unsigned int lcore_id) if (pi->turbo_available && pi->curr_idx <= 1) { /* Try to set freq to max by default coming out of turbo */ if (power_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -873,11 +873,11 @@ int power_pstate_get_capabilities(unsigned int lcore_id, struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/rte_power.c b/lib/power/rte_power.c index 1502612b0a..7bee4f88f9 100644 --- a/lib/power/rte_power.c +++ b/lib/power/rte_power.c @@ -74,7 +74,7 @@ rte_power_set_env(enum power_management_env env) rte_spinlock_lock(&global_env_cfg_lock); if (global_default_env != PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Power Management Environment already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Power Management Environment already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -143,7 +143,7 @@ rte_power_set_env(enum power_management_env env) rte_power_freq_disable_turbo = power_amd_pstate_disable_turbo; rte_power_get_capabilities = power_amd_pstate_get_capabilities; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; } @@ -190,46 +190,46 @@ rte_power_init(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_init(lcore_id); default: - RTE_LOG(INFO, POWER, "Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Env isn't set yet!"); } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...\n"); + RTE_LOG_LINE(INFO, 
POWER, "Attempting to initialise ACPI cpufreq power management..."); ret = power_acpi_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_ACPI_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise PSTAT power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise PSTAT power management..."); ret = power_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise AMD PSTATE power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise AMD PSTATE power management..."); ret = power_amd_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_AMD_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise CPPC power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise CPPC power management..."); ret = power_cppc_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_CPPC_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise VM power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise VM power management..."); ret = power_kvm_vm_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_KVM_VM); goto out; } - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment for lcore " - "%u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment for lcore " + "%u", lcore_id); out: return ret; } @@ -249,7 +249,7 @@ rte_power_exit(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_exit(lcore_id); default: - RTE_LOG(ERR, POWER, "Environment has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Environment has not been set, unable to exit gracefully"); } return -1; diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 6f18ed0adf..fb7d8fddb3 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -146,7 +146,7 @@ get_monitor_addresses(struct pmd_core_cfg *cfg, /* attempted out of bounds access */ if (i >= len) { - RTE_LOG(ERR, POWER, "Too many queues being monitored\n"); + RTE_LOG_LINE(ERR, POWER, "Too many queues being monitored"); return -1; } @@ -423,7 +423,7 @@ check_scale(unsigned int lcore) if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported"); return -ENOTSUP; } /* ensure we could initialize the power library */ @@ -434,7 +434,7 @@ check_scale(unsigned int lcore) env = rte_power_get_env(); if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ && env != PM_ENV_AMD_PSTATE_CPUFREQ) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized"); return -ENOTSUP; } @@ -450,7 +450,7 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* check if rte_power_monitor is supported */ if (!global_data.intrinsics_support.power_monitor) { - RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring intrinsics are not supported"); return -ENOTSUP; } /* check if multi-monitor is supported */ @@ -459,14 +459,14 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* 
if we're adding a new queue, do we support multiple queues? */ if (cfg->n_queues > 0 && !multimonitor_supported) { - RTE_LOG(DEBUG, POWER, "Monitoring multiple queues is not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring multiple queues is not supported"); return -ENOTSUP; } /* check if the device supports the necessary PMD API */ if (rte_eth_get_monitor_addr(qdata->portid, qdata->qid, &dummy) == -ENOTSUP) { - RTE_LOG(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr\n"); + RTE_LOG_LINE(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr"); return -ENOTSUP; } @@ -566,14 +566,14 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id, clb = clb_pause; break; default: - RTE_LOG(DEBUG, POWER, "Invalid power management type\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid power management type"); ret = -EINVAL; goto end; } /* add this queue to the list */ ret = queue_list_add(lcore_cfg, &qdata); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to add queue to list: %s", strerror(-ret)); goto end; } @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); + RTE_LOG_LINE(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); return -EINVAL; } pause_duration = duration; @@ -704,12 +704,12 @@ int rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -721,7 +721,7 @@ int rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); return -EINVAL; } @@ -742,12 +742,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == 0) - RTE_LOG(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq."); return scale_freq_min[lcore]; } @@ -756,12 +756,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == UINT32_MAX) { - RTE_LOG(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq max config not set. 
Using sysfs max freq."); return 0; } diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c index 9c20fe150d..d57fc18faa 100644 --- a/lib/power/rte_power_uncore.c +++ b/lib/power/rte_power_uncore.c @@ -101,7 +101,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_spinlock_lock(&global_env_cfg_lock); if (default_uncore_env != RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Uncore Power Management Env already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Power Management Env already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -124,7 +124,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_power_uncore_get_num_pkgs = power_intel_uncore_get_num_pkgs; rte_power_uncore_get_num_dies = power_intel_uncore_get_num_dies; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", env); + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; goto out; } @@ -159,12 +159,12 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_init(pkg, die); default: - RTE_LOG(INFO, POWER, "Uncore Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Uncore Env isn't set yet!"); break; } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt..."); ret = power_intel_uncore_init(pkg, die); if (ret == 0) { rte_power_set_uncore_env(RTE_UNCORE_PM_ENV_INTEL_UNCORE); @@ -172,8 +172,8 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) } if (default_uncore_env == RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment " - "for package %u Die %u\n", pkg, die); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment " + "for package %u Die %u", pkg, die); ret = 0; } out: @@ -187,7 +187,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_exit(pkg, die); default: - RTE_LOG(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully"); break; } return -1; diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 5b6530788a..bd0b83be0c 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -20,7 +20,7 @@ #include "rcu_qsbr_pvt.h" #define RCU_LOG(level, fmt, args...) 
\ - RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) /* Get the memory size of QSBR variable */ size_t diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c index 640719c3ec..847e45b9f7 100644 --- a/lib/reorder/rte_reorder.c +++ b/lib/reorder/rte_reorder.c @@ -74,34 +74,34 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize, }; if (b == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer parameter:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer parameter:" + " NULL"); rte_errno = EINVAL; return NULL; } if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } if (bufsize < min_bufsize) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer memory size: %u, " - "minimum required: %u\n", bufsize, min_bufsize); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer memory size: %u, " + "minimum required: %u", bufsize, min_bufsize); rte_errno = EINVAL; return NULL; } rte_reorder_seqn_dynfield_offset = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); if (rte_reorder_seqn_dynfield_offset < 0) { - RTE_LOG(ERR, REORDER, - "Failed to register mbuf field for reorder sequence number, rte_errno: %i\n", + RTE_LOG_LINE(ERR, REORDER, + "Failed to register mbuf field for reorder sequence number, rte_errno: %i", rte_errno); rte_errno = ENOMEM; return NULL; @@ -161,14 +161,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Check user arguments. */ if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } @@ -176,7 +176,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* allocate tailq entry */ te = rte_zmalloc("REORDER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, REORDER, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, REORDER, "Failed to allocate tailq entry"); rte_errno = ENOMEM; return NULL; } @@ -184,7 +184,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Allocate memory to store the reorder buffer structure. 
*/ b = rte_zmalloc_socket("REORDER_BUFFER", bufsize, 0, socket_id); if (b == NULL) { - RTE_LOG(ERR, REORDER, "Memzone allocation failed\n"); + RTE_LOG_LINE(ERR, REORDER, "Memzone allocation failed"); rte_errno = ENOMEM; rte_free(te); return NULL; diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c index 251d0d4ef1..baee4bff5a 100644 --- a/lib/rib/rte_rib.c +++ b/lib/rib/rte_rib.c @@ -416,8 +416,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB %s", name); return NULL; } @@ -441,8 +441,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("RIB_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -451,7 +451,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c index ad3d48ab8e..ce54f51208 100644 --- a/lib/rib/rte_rib6.c +++ b/lib/rib/rte_rib6.c @@ -485,8 +485,8 @@ rte_rib6_create(const char *name, int socket_id, NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB6 %s", name); return NULL; } @@ -510,8 +510,8 @@ rte_rib6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("RIB6_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB6 %s", name); rte_errno = ENOMEM; goto exit; } @@ -520,7 +520,7 @@ rte_rib6_create(const char *name, int socket_id, rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib6), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB6 %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB6 %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c index 12046419f1..7fd6576c8c 100644 --- a/lib/ring/rte_ring.c +++ b/lib/ring/rte_ring.c @@ -55,15 +55,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) /* Check if element size is a multiple of 4B */ if (esize % 4 != 0) { - RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + RTE_LOG_LINE(ERR, RING, "element size is not a multiple of 4"); return -EINVAL; } /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { - RTE_LOG(ERR, RING, - "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested number of elements is invalid, must be power of 2, and not exceed %u", RTE_RING_SZ_MASK); return -EINVAL; @@ -198,8 +198,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, /* future proof flags, only 
allow supported values */ if (flags & ~RING_F_MASK) { - RTE_LOG(ERR, RING, - "Unsupported flags requested %#x\n", flags); + RTE_LOG_LINE(ERR, RING, + "Unsupported flags requested %#x", flags); return -EINVAL; } @@ -219,8 +219,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, r->capacity = count; } else { if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) { - RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested size is invalid, must be power of 2, and not exceed the size limit %u", RTE_RING_SZ_MASK); return -EINVAL; } @@ -274,7 +274,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -299,7 +299,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, TAILQ_INSERT_TAIL(ring_list, te, next); } else { r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory"); rte_free(te); } rte_mcfg_tailq_write_unlock(); @@ -331,8 +331,8 @@ rte_ring_free(struct rte_ring *r) * therefore, there is no memzone to free. */ if (r->memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring, not created with rte_ring_create()\n"); + RTE_LOG_LINE(ERR, RING, + "Cannot free ring, not created with rte_ring_create()"); return; } @@ -355,7 +355,7 @@ rte_ring_free(struct rte_ring *r) rte_mcfg_tailq_write_unlock(); if (rte_memzone_free(r->memzone) != 0) - RTE_LOG(ERR, RING, "Cannot free memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot free memory"); rte_free(te); } diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c index cce0ce762d..ac1f99e2bd 100644 --- a/lib/sched/rte_pie.c +++ b/lib/sched/rte_pie.c @@ -17,7 +17,7 @@ int rte_pie_rt_data_init(struct rte_pie *pie) { if (pie == NULL) { - RTE_LOG(ERR, SCHED, "%s: Invalid addr for pie\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Invalid addr for pie", __func__); return -EINVAL; } @@ -39,26 +39,26 @@ rte_pie_config_init(struct rte_pie_config *pie_cfg, return -1; if (qdelay_ref <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qdelay_ref\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qdelay_ref", __func__); return -EINVAL; } if (dp_update_interval <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for dp_update_interval\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for dp_update_interval", __func__); return -EINVAL; } if (max_burst <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for max_burst\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for max_burst", __func__); return -EINVAL; } if (tailq_th <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tailq_th\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tailq_th", __func__); return -EINVAL; } diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index 76dd8dd738..75f2f12007 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -325,23 +325,23 @@ pipe_profile_check(struct rte_sched_pipe_params *params, /* Pipe parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* TB rate: non-zero, not greater 
than port rate */ if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb rate", __func__); return -EINVAL; } /* TB size: non-zero */ if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb size", __func__); return -EINVAL; } @@ -350,38 +350,38 @@ pipe_profile_check(struct rte_sched_pipe_params *params, if ((qsize[i] == 0 && params->tc_rate[i] != 0) || (qsize[i] != 0 && (params->tc_rate[i] == 0 || params->tc_rate[i] > params->tb_rate))) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize or tc_rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize or tc_rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 || qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for be traffic class rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for be traffic class rate", __func__); return -EINVAL; } /* TC period: non-zero */ if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc period", __func__); return -EINVAL; } /* Best effort tc oversubscription weight: non-zero */ if (params->tc_ov_weight == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc ov weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc ov weight", __func__); return -EINVAL; } /* Queue WRR weights: non-zero */ for (i = 0; i < RTE_SCHED_BE_QUEUES_PER_PIPE; i++) { if (params->wrr_weights[i] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for wrr weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for wrr weight", __func__); return -EINVAL; } } @@ -397,20 +397,20 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter params", __func__); return -EINVAL; } if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb rate", __func__); return -EINVAL; } if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb size", __func__); return -EINVAL; } @@ -418,21 +418,21 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, uint64_t tc_rate = params->tc_rate[i]; if (tc_rate == 0 || (tc_rate > params->tb_rate)) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect tc rate(best effort)\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect tc rate(best effort)", __func__); return -EINVAL; } if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc period", __func__); return -EINVAL; } @@ -445,29 +445,29 @@ rte_sched_port_check_params(struct 
rte_sched_port_params *params) uint32_t i; if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* socket */ if (params->socket < 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for socket id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for socket id", __func__); return -EINVAL; } /* rate */ if (params->rate == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for rate", __func__); return -EINVAL; } /* mtu */ if (params->mtu == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for mtu\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for mtu", __func__); return -EINVAL; } @@ -475,8 +475,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) if (params->n_subports_per_port == 0 || params->n_subports_per_port > 1u << 16 || !rte_is_power_of_2(params->n_subports_per_port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for number of subports\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for number of subports", __func__); return -EINVAL; } @@ -484,8 +484,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) params->n_subport_profiles == 0 || params->n_max_subport_profiles == 0 || params->n_subport_profiles > params->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport profiles", __func__); return -EINVAL; } @@ -496,8 +496,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) status = subport_profile_check(p, params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -506,8 +506,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) /* n_pipes_per_subport: non-zero, power of 2 */ if (params->n_pipes_per_subport == 0 || !rte_is_power_of_2(params->n_pipes_per_subport)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for maximum pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for maximum pipes number", __func__); return -EINVAL; } @@ -830,8 +830,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } @@ -842,14 +842,14 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, uint16_t qsize = params->qsize[i]; if (qsize != 0 && !rte_is_power_of_2(qsize)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize", __func__); return -EINVAL; } } if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Incorrect qsize", __func__); return -EINVAL; } @@ -857,8 +857,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, if (params->n_pipes_per_subport_enabled == 0 || params->n_pipes_per_subport_enabled > n_max_pipes_per_subport || !rte_is_power_of_2(params->n_pipes_per_subport_enabled)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect 
value for pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipes number", __func__); return -EINVAL; } @@ -867,8 +867,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, params->n_pipe_profiles == 0 || params->n_max_pipe_profiles == 0 || params->n_pipe_profiles > params->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for pipe profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipe profiles", __func__); return -EINVAL; } @@ -878,8 +878,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, status = pipe_profile_check(p, rate, &params->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } } @@ -896,8 +896,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, status = rte_sched_port_check_params(port_params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler port params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler port params check failed (%d)", __func__, status); return 0; @@ -910,8 +910,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, port_params->n_pipes_per_subport, port_params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler subport params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler subport params check failed (%d)", __func__, status); return 0; @@ -941,8 +941,8 @@ rte_sched_port_config(struct rte_sched_port_params *params) status = rte_sched_port_check_params(params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); return NULL; } @@ -956,7 +956,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE, params->socket); if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); return NULL; } @@ -965,7 +965,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port->subport_profiles = rte_zmalloc_socket("subport_profile", size2, RTE_CACHE_LINE_SIZE, params->socket); if (port->subport_profiles == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); rte_free(port); return NULL; } @@ -1107,8 +1107,8 @@ rte_sched_red_config(struct rte_sched_port *port, params->cman_params->red_params[i][j].maxp_inv) != 0) { rte_sched_free_memory(port, n_subports); - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: RED configuration init fails", __func__); return -EINVAL; } } @@ -1127,8 +1127,8 @@ rte_sched_pie_config(struct rte_sched_port *port, for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { if (params->cman_params->pie_params[i].tailq_th > params->qsize[i]) { - RTE_LOG(NOTICE, SCHED, - "%s: PIE tailq threshold incorrect\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE tailq threshold incorrect", __func__); return -EINVAL; } @@ -1139,8 +1139,8 @@ rte_sched_pie_config(struct rte_sched_port *port, params->cman_params->pie_params[i].tailq_th) != 0) { rte_sched_free_memory(port, n_subports); - 
RTE_LOG(NOTICE, SCHED, - "%s: PIE configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE configuration init fails", __func__); return -EINVAL; } } @@ -1171,14 +1171,14 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port, struct rte_sched_subport *s; if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); return -EINVAL; } @@ -1204,21 +1204,21 @@ rte_sched_subport_config(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return 0; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); ret = -EINVAL; goto out; } if (subport_profile_id >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profile exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profile exceeds the max limit", __func__); ret = -EINVAL; goto out; @@ -1234,8 +1234,8 @@ rte_sched_subport_config(struct rte_sched_port *port, port->n_pipes_per_subport, port->rate); if (status != 0) { - RTE_LOG(NOTICE, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(NOTICE, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); ret = -EINVAL; goto out; @@ -1250,8 +1250,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s = rte_zmalloc_socket("subport_params", size0 + size1, RTE_CACHE_LINE_SIZE, port->socket); if (s == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Memory allocation fails", __func__); ret = -ENOMEM; goto out; } @@ -1282,8 +1282,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->cman_enabled = true; status = rte_sched_cman_config(port, s, params, n_subports); if (status) { - RTE_LOG(NOTICE, SCHED, - "%s: CMAN configuration fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: CMAN configuration fails", __func__); return status; } } else { @@ -1330,8 +1330,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, bmp_mem_size); if (s->bmp == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Subport bitmap init error\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Subport bitmap init error", __func__); ret = -EINVAL; goto out; } @@ -1400,29 +1400,29 @@ rte_sched_pipe_config(struct rte_sched_port *port, deactivate = (pipe_profile < 0); if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); ret = -EINVAL; goto out; } s = port->subports[subport_id]; if (pipe_id >= s->n_pipes_per_subport_enabled) { - RTE_LOG(ERR, SCHED, - 
"%s: Incorrect value for parameter pipe id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe id", __func__); ret = -EINVAL; goto out; } if (!deactivate && profile >= s->n_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter pipe profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe profile", __func__); ret = -EINVAL; goto out; } @@ -1447,8 +1447,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } @@ -1489,8 +1489,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } p->tc_ov_period_id = s->tc_ov_period_id; @@ -1518,15 +1518,15 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } /* Subport id not exceeds the max limit */ if (subport_id > port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } @@ -1534,16 +1534,16 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profiles exceeds the max limit */ if (s->n_pipe_profiles >= s->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Number of pipe profiles exceeds the max limit\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Number of pipe profiles exceeds the max limit", __func__); return -EINVAL; } /* Pipe params */ status = pipe_profile_check(params, port->rate, &s->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1553,8 +1553,8 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profile should not exists */ for (i = 0; i < s->n_pipe_profiles; i++) if (memcmp(s->pipe_profiles + i, pp, sizeof(*pp)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile exists", __func__); return -EINVAL; } @@ -1581,20 +1581,20 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter port", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter profile", __func__); return -EINVAL; } if (subport_profile_id == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter subport_profile_id\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for 
parameter subport_profile_id", __func__); return -EINVAL; } @@ -1603,16 +1603,16 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Subport profiles exceeds the max limit */ if (port->n_subport_profiles >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profiles exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profiles exceeds the max limit", __func__); return -EINVAL; } status = subport_profile_check(params, port->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1622,8 +1622,8 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, for (i = 0; i < port->n_subport_profiles; i++) if (memcmp(port->subport_profiles + i, dst, sizeof(*dst)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile exists", __func__); return -EINVAL; } @@ -1695,26 +1695,26 @@ rte_sched_subport_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (tc_ov == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc_ov\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc_ov", __func__); return -EINVAL; } @@ -1743,26 +1743,26 @@ rte_sched_queue_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (queue_id >= rte_sched_port_queues_per_port(port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for queue id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for queue id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (qlen == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter qlen\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter qlen", __func__); return -EINVAL; } subport_qmask = port->n_pipes_per_subport_log2 + 4; diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c index 902cb78eac..944f5064d2 100644 --- a/lib/table/rte_table_acl.c +++ b/lib/table/rte_table_acl.c @@ -65,21 +65,21 @@ rte_table_acl_create( /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for params", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for name", __func__); return NULL; } if 
(p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rules", __func__); return NULL; } if ((p->n_rule_fields == 0) || (p->n_rule_fields > RTE_ACL_MAX_FIELDS)) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rule_fields", __func__); return NULL; } @@ -98,8 +98,8 @@ rte_table_acl_create( acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (acl == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for ACL table", __func__, total_size); return NULL; } @@ -140,7 +140,7 @@ rte_table_acl_free(void *table) /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -164,7 +164,7 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Create low level ACL table */ ctx = rte_acl_create(&acl->acl_params); if (ctx == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot create low level ACL table", __func__); return -1; } @@ -176,8 +176,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) status = rte_acl_add_rules(ctx, acl->acl_rule_list[i], 1); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot add rule to low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot add rule to low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -196,8 +196,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Build low level ACl table */ status = rte_acl_build(ctx, &acl->cfg); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot build the low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot build the low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -226,29 +226,29 @@ rte_table_acl_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } @@ -291,7 +291,7 @@ rte_table_acl_entry_add( /* Return if max rules */ if (free_pos_valid == 0) { - RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Max number of rules reached", __func__); return -ENOSPC; } @@ -342,15 +342,15 @@ rte_table_acl_entry_delete( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, 
"%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } @@ -424,28 +424,28 @@ rte_table_acl_entry_add_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to add\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to add", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entries_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries_ptr parameter is NULL", __func__); return -EINVAL; } @@ -455,20 +455,20 @@ rte_table_acl_entry_add_bulk( struct rte_table_acl_rule_add_params *rule; if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } if (entries[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } rule = keys[i]; if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } } @@ -604,26 +604,26 @@ rte_table_acl_entry_delete_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to delete\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to delete", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } for (i = 0; i < n_keys; i++) { if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c index a45b29ed6a..0b3107104d 100644 --- a/lib/table/rte_table_array.c +++ b/lib/table/rte_table_array.c @@ -61,8 +61,8 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size) total_size = 
total_cl_size * RTE_CACHE_LINE_SIZE; t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for array table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for array table", __func__, total_size); return NULL; } @@ -83,7 +83,7 @@ rte_table_array_free(void *table) /* Check input parameters */ if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -107,24 +107,24 @@ rte_table_array_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c index 86c960c103..228b49a893 100644 --- a/lib/table/rte_table_hash_cuckoo.c +++ b/lib/table/rte_table_hash_cuckoo.c @@ -47,27 +47,27 @@ static int check_params_create_hash_cuckoo(struct rte_table_hash_cuckoo_params *params) { if (params == NULL) { - RTE_LOG(ERR, TABLE, "NULL Input Parameters.\n"); + RTE_LOG_LINE(ERR, TABLE, "NULL Input Parameters."); return -EINVAL; } if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "Table name is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "Table name is NULL."); return -EINVAL; } if (params->key_size == 0) { - RTE_LOG(ERR, TABLE, "Invalid key_size.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid key_size."); return -EINVAL; } if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "Invalid n_keys.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid n_keys."); return -EINVAL; } if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "f_hash is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "f_hash is NULL."); return -EINVAL; } @@ -94,8 +94,8 @@ rte_table_hash_cuckoo_create(void *params, t = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for cuckoo hash table %s", __func__, total_size, p->name); return NULL; } @@ -114,8 +114,8 @@ rte_table_hash_cuckoo_create(void *params, if (h_table == NULL) { h_table = rte_hash_create(&hash_cuckoo_params); if (h_table == NULL) { - RTE_LOG(ERR, TABLE, - "%s: failed to create cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: failed to create cuckoo hash table %s", __func__, p->name); rte_free(t); return NULL; @@ -131,8 +131,8 @@ rte_table_hash_cuckoo_create(void *params, t->key_offset = p->key_offset; t->h_table = h_table; - RTE_LOG(INFO, TABLE, - "%s: Cuckoo hash table %s memory footprint is %u bytes\n", + RTE_LOG_LINE(INFO, TABLE, + "%s: Cuckoo hash table %s memory footprint is %u bytes", __func__, 
p->name, total_size); return t; } diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c index 9f0220ded2..38ea96c654 100644 --- a/lib/table/rte_table_hash_ext.c +++ b/lib/table/rte_table_hash_ext.c @@ -128,33 +128,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -211,8 +211,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) key_sz + key_stack_sz + bkt_ext_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -222,13 +222,13 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " - "footprint is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " + "footprint is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c index 584c3f2c98..63b28f79c0 100644 --- a/lib/table/rte_table_hash_key16.c +++ b/lib/table/rte_table_hash_key16.c @@ -107,32 +107,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - 
RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -181,8 +181,8 @@ rte_table_hash_create_key16_lru(void *params, total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -192,13 +192,13 @@ rte_table_hash_create_key16_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -236,7 +236,7 @@ rte_table_hash_free_key16_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -391,8 +391,8 @@ rte_table_hash_create_key16_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -402,13 +402,13 @@ rte_table_hash_create_key16_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -446,7 +446,7 @@ rte_table_hash_free_key16_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c index 22b5ca9166..6293bf518b 100644 --- a/lib/table/rte_table_hash_key32.c +++ b/lib/table/rte_table_hash_key32.c @@ -111,32 +111,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || 
(!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -184,8 +184,8 @@ rte_table_hash_create_key32_lru(void *params, KEYS_PER_BUCKET * entry_size); total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -195,14 +195,14 @@ rte_table_hash_create_key32_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -244,7 +244,7 @@ rte_table_hash_free_key32_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -400,8 +400,8 @@ rte_table_hash_create_key32_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -411,14 +411,14 @@ rte_table_hash_create_key32_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64" bytes\n", + "is %" PRIu64" bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -460,7 +460,7 @@ rte_table_hash_free_key32_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c index bd0ec4aac0..69e61c2ec8 100644 --- a/lib/table/rte_table_hash_key8.c +++ b/lib/table/rte_table_hash_key8.c @@ -101,32 +101,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) 
{ - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -173,8 +173,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -184,14 +184,14 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -226,7 +226,7 @@ rte_table_hash_free_key8_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -377,8 +377,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -388,14 +388,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -430,7 +430,7 @@ rte_table_hash_free_key8_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c index 758ec4fe7a..190062b33f 100644 --- a/lib/table/rte_table_hash_lru.c +++ b/lib/table/rte_table_hash_lru.c @@ -105,33 +105,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name 
invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -187,9 +187,9 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) key_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } @@ -199,14 +199,14 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" - " is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" + " is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c index c2ef0d9ba0..989ab65ee6 100644 --- a/lib/table/rte_table_lpm.c +++ b/lib/table/rte_table_lpm.c @@ -59,29 +59,29 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid number_tbl8s", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -93,8 +93,8 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM table", __func__, total_size); return NULL; } @@ -107,7 +107,7 @@ 
rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n"); + RTE_LOG_LINE(ERR, TABLE, "Unable to create low-level LPM table"); return NULL; } @@ -127,7 +127,7 @@ rte_table_lpm_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -187,21 +187,21 @@ rte_table_lpm_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -216,7 +216,7 @@ rte_table_lpm_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -226,7 +226,7 @@ rte_table_lpm_entry_add( /* Add rule to low level LPM table */ if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule add failed", __func__); return -1; } @@ -253,16 +253,16 @@ rte_table_lpm_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -271,7 +271,7 @@ rte_table_lpm_entry_delete( status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM algorithmic error", __func__); return -1; } if (status == 0) { @@ -282,7 +282,7 @@ rte_table_lpm_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c index 6f3e11a14f..5b0e643832 100644 --- a/lib/table/rte_table_lpm_ipv6.c +++ b/lib/table/rte_table_lpm_ipv6.c @@ -56,29 +56,29 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL 
input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -90,8 +90,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM IPv6 table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM IPv6 table", __func__, total_size); return NULL; } @@ -103,8 +103,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config); if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, - "Unable to create low-level LPM IPv6 table\n"); + RTE_LOG_LINE(ERR, TABLE, + "Unable to create low-level LPM IPv6 table"); return NULL; } @@ -124,7 +124,7 @@ rte_table_lpm_ipv6_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -184,21 +184,21 @@ rte_table_lpm_ipv6_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -224,7 +224,7 @@ rte_table_lpm_ipv6_entry_add( /* Add rule to low level LPM table */ if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule add failed", __func__); return -1; } @@ -252,16 +252,16 @@ rte_table_lpm_ipv6_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: 
ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -270,7 +270,7 @@ rte_table_lpm_ipv6_entry_delete( status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 algorithmic error", __func__); return -1; } @@ -282,7 +282,7 @@ rte_table_lpm_ipv6_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c index cc21516995..a54b502f79 100644 --- a/lib/table/rte_table_stub.c +++ b/lib/table/rte_table_stub.c @@ -38,8 +38,8 @@ rte_table_stub_create(__rte_unused void *params, stub = rte_zmalloc_socket("TABLE", size, RTE_CACHE_LINE_SIZE, socket_id); if (stub == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for stub table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for stub table", __func__, size); return NULL; } diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c index 83586c5b4f..ff91c3169a 100644 --- a/lib/vhost/fd_man.c +++ b/lib/vhost/fd_man.c @@ -334,8 +334,8 @@ fdset_pipe_init(struct fdset *fdset) int ret; if (pipe(fdset->u.pipefd) < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to create pipe for vhost fdset\n"); + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to create pipe for vhost fdset"); return -1; } @@ -343,8 +343,8 @@ fdset_pipe_init(struct fdset *fdset) fdset_pipe_read_cb, NULL, NULL); if (ret < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to add pipe readfd %d into vhost server fdset\n", + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to add pipe readfd %d into vhost server fdset", fdset->u.readfd); fdset_pipe_uninit(fdset); -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
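The hunks above all apply the same mechanical change: drop the trailing \n from the format string and switch to the per-line macro, which appends the newline itself. Below is a minimal sketch of that pattern for a hypothetical library, assuming only the public rte_log()/RTE_LOG() API; the EXAMPLE_* names and example_logtype are illustrative and not part of this series, and the actual RTE_LOG_LINE definition (and its build-time check for embedded newlines) lives in lib/log and is not reproduced here.

/*
 * Minimal sketch with hypothetical names: an old-style per-library helper
 * next to its per-line replacement. Call sites of the per-line variant
 * must not embed their own newline.
 */
#include <errno.h>
#include <rte_log.h>

extern int example_logtype;	/* registered elsewhere, e.g. with RTE_LOG_REGISTER_DEFAULT */
#define RTE_LOGTYPE_EXAMPLE example_logtype

/* Old style: every call site has to remember to end the format with "\n". */
#define EXAMPLE_LOG(level, fmt, args...) \
	rte_log(RTE_LOG_ ## level, example_logtype, "EXAMPLE: " fmt, ## args)

/* New style: the helper appends the newline exactly once. */
#define EXAMPLE_LOG_LINE(level, fmt, args...) \
	RTE_LOG(level, EXAMPLE, "EXAMPLE: " fmt "\n", ## args)

static inline int
example_check(const void *obj)
{
	if (obj == NULL) {
		/* No trailing \n in the format string any more. */
		EXAMPLE_LOG_LINE(ERR, "%s: obj parameter is NULL", __func__);
		return -EINVAL;
	}
	return 0;
}

Keeping the newline inside the helper, rather than at each call site, is what makes a compile-time check practical: once the helper owns the "\n", a format string that still embeds its own newline can be rejected when the macro is expanded.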
* [PATCH v3 13/14] lib: replace logging helpers 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (11 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 12/14] lib: convert to per line logging David Marchand @ 2023-12-18 9:27 ` David Marchand 2023-12-18 9:27 ` [PATCH v3 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Ruifeng Wang, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia This is a preparation step before the next change. Many libraries have their own logging helpers that do not add a newline in their format string. Some previous changes fixed places where some of those helpers were called without a trailing newline. Using RTE_LOG_LINE in the existing helpers will ensure we don't introduce new issues in the future. The problem is that if we simply convert to the RTE_LOG_LINE helper, a future fix may introduce a regression since the logging helper change won't be backported. To address this concern, rename the existing helpers: backporting a call to them will trigger a conflict or build issue in LTS branches. Note: - for bpf and vhost, which still have some debug multiline messages, a direct call to RTE_LOG/RTE_LOG_DP is used: this will make it easier to notice such special cases, - for previously publicly exposed logging helpers, when such a helper is not publicly used (iow not used in a public inline API), it is removed from the public API (this is the case for the member library), Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v2: - kept a RTE_ prefix for the bpf log macro to avoid potential collision with external code, --- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 4 +- lib/bpf/bpf_validate.c | 38 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 878 +++++++++++++-------------- lib/ethdev/rte_ethdev.h | 52 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/pdump/rte_pdump.c | 112 ++-- lib/power/power_acpi_cpufreq.c | 10 +- lib/power/power_amd_pstate_cpufreq.c | 12 +- lib/power/power_common.c | 4 +- lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 12 +- lib/power/power_intel_uncore.c | 4 +- lib/power/power_pstate_cpufreq.c | 12 +- lib/regexdev/rte_regexdev.c | 86 +-- lib/regexdev/rte_regexdev.h | 14 +- lib/telemetry/telemetry.c | 41 +- lib/vhost/iotlb.c | 18 +- lib/vhost/socket.c | 102 ++-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 ++-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 ++-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_user.c | 508
++++++++-------- lib/vhost/virtio_net.c | 188 +++--- lib/vhost/virtio_net_ctrl.c | 38 +- 50 files changed, 1431 insertions(+), 1414 deletions(-) create mode 100644 lib/member/member.h diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c index 8a0254d8bb..bbe75c8bfe 100644 --- a/lib/bpf/bpf.c +++ b/lib/bpf/bpf.c @@ -44,7 +44,7 @@ __rte_bpf_jit(struct rte_bpf *bpf) #endif if (rc != 0) - RTE_BPF_LOG(WARNING, "%s(%p) failed, error code: %d;\n", + RTE_BPF_LOG_LINE(WARNING, "%s(%p) failed, error code: %d;", __func__, bpf, rc); return rc; } diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index d441be6663..d7ff2b4325 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -226,8 +226,8 @@ static bool convert_bpf_load(const struct bpf_insn *fp, case SKF_AD_OFF + SKF_AD_RANDOM: case SKF_AD_OFF + SKF_AD_ALU_XOR_X: /* Linux has special negative offsets to access meta-data. */ - RTE_BPF_LOG(ERR, - "rte_bpf_convert: socket offset %d not supported\n", + RTE_BPF_LOG_LINE(ERR, + "rte_bpf_convert: socket offset %d not supported", fp->k - SKF_AD_OFF); return true; default: @@ -246,7 +246,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, uint8_t bpf_src; if (len > BPF_MAXINSNS) { - RTE_BPF_LOG(ERR, "%s: cBPF program too long (%zu insns)\n", + RTE_BPF_LOG_LINE(ERR, "%s: cBPF program too long (%zu insns)", __func__, len); return -EINVAL; } @@ -482,7 +482,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, /* Unknown instruction. */ default: - RTE_BPF_LOG(ERR, "%s: Unknown instruction!: %#x\n", + RTE_BPF_LOG_LINE(ERR, "%s: Unknown instruction!: %#x", __func__, fp->code); goto err; } @@ -526,7 +526,7 @@ rte_bpf_convert(const struct bpf_program *prog) int ret; if (prog == NULL) { - RTE_BPF_LOG(ERR, "%s: NULL program\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: NULL program", __func__); rte_errno = EINVAL; return NULL; } @@ -534,12 +534,12 @@ rte_bpf_convert(const struct bpf_program *prog) /* 1st pass: calculate the eBPF program length */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot get eBPF length\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot get eBPF length", __func__); rte_errno = -ret; return NULL; } - RTE_BPF_LOG(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u\n", + RTE_BPF_LOG_LINE(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u", __func__, prog->bf_len, ebpf_len); prm = rte_zmalloc("bpf_filter", @@ -555,7 +555,7 @@ rte_bpf_convert(const struct bpf_program *prog) /* 2nd pass: remap cBPF to eBPF instructions */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__); free(prm); rte_errno = -ret; return NULL; diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c index 09f4a9a571..5d597ec170 100644 --- a/lib/bpf/bpf_exec.c +++ b/lib/bpf/bpf_exec.c @@ -43,8 +43,8 @@ #define BPF_DIV_ZERO_CHECK(bpf, reg, ins, type) do { \ if ((type)(reg)[(ins)->src_reg] == 0) { \ - RTE_BPF_LOG(ERR, \ - "%s(%p): division by 0 at pc: %#zx;\n", \ + RTE_BPF_LOG_LINE(ERR, \ + "%s(%p): division by 0 at pc: %#zx;", \ __func__, bpf, \ (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \ return 0; \ @@ -136,8 +136,8 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM], mb = (const struct rte_mbuf *)(uintptr_t)reg[EBPF_REG_6]; p = rte_pktmbuf_read(mb, off, len, reg + EBPF_REG_0); if (p == NULL) - RTE_BPF_LOG(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, 
len=%u): " - "load beyond packet boundary at pc: %#zx;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " + "load beyond packet boundary at pc: %#zx;", __func__, bpf, mb, off, len, (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); return p; @@ -462,8 +462,8 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM]) case (BPF_JMP | EBPF_EXIT): return reg[EBPF_REG_0]; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %#zx;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %#zx;", __func__, bpf, ins->code, (uintptr_t)ins - (uintptr_t)bpf->prm.ins); return 0; diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index b483569071..6a82ae4ef2 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); int __rte_bpf_jit_arm64(struct rte_bpf *bpf); extern int rte_bpf_logtype; +#define RTE_LOGTYPE_BPF rte_bpf_logtype -#define RTE_BPF_LOG(lvl, fmt, args...) \ - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) +#define RTE_BPF_LOG_LINE(lvl, fmt, args...) \ + RTE_LOG(lvl, BPF, fmt "\n", ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index f9ddafd7dc..96b8cd2e03 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit) for (idx = 0; idx < limit; idx++) { if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) { - RTE_BPF_LOG(ERR, - "%s: invalid opcode at %u;\n", __func__, idx); + RTE_BPF_LOG_LINE(ERR, + "%s: invalid opcode at %u;", __func__, idx); return -EINVAL; } } @@ -1378,8 +1378,8 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) emit_epilogue(ctx); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c index a73b2006db..4d74e418f8 100644 --- a/lib/bpf/bpf_jit_x86.c +++ b/lib/bpf/bpf_jit_x86.c @@ -1476,8 +1476,8 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf) emit_epilog(st); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 45ce9210da..de43347405 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -98,7 +98,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) if (rc != 0) { rte_errno = -rc; - RTE_BPF_LOG(ERR, "%s: %d-th xsym is invalid\n", __func__, i); + RTE_BPF_LOG_LINE(ERR, "%s: %d-th xsym is invalid", __func__, i); return NULL; } diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c index 02a5d8ba0d..e0abd3c856 100644 --- a/lib/bpf/bpf_load_elf.c +++ b/lib/bpf/bpf_load_elf.c @@ -84,8 +84,8 @@ resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz, * as an ordinary EBPF_CALL. 
*/ if (ins[idx].src_reg == EBPF_PSEUDO_CALL) { - RTE_BPF_LOG(INFO, "%s(%u): " - "EBPF_PSEUDO_CALL to external function: %s\n", + RTE_BPF_LOG_LINE(INFO, "%s(%u): " + "EBPF_PSEUDO_CALL to external function: %s", __func__, idx, sn); ins[idx].src_reg = EBPF_REG_0; } @@ -121,7 +121,7 @@ check_elf_header(const Elf64_Ehdr *eh) err = "unexpected machine type"; if (err != NULL) { - RTE_BPF_LOG(ERR, "%s(): %s\n", __func__, err); + RTE_BPF_LOG_LINE(ERR, "%s(): %s", __func__, err); return -EINVAL; } @@ -144,7 +144,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) eh = elf64_getehdr(elf); if (eh == NULL) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -167,7 +167,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) if (sd == NULL || sd->d_size == 0 || sd->d_size % sizeof(struct ebpf_insn) != 0) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -216,8 +216,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz, rc = resolve_xsym(sn, ofs, ins, ins_sz, prm); if (rc != 0) { - RTE_BPF_LOG(ERR, - "resolve_xsym(%s, %zu) error code: %d\n", + RTE_BPF_LOG_LINE(ERR, + "resolve_xsym(%s, %zu) error code: %d", sn, ofs, rc); return rc; } @@ -309,7 +309,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, fd = open(fname, O_RDONLY); if (fd < 0) { rc = errno; - RTE_BPF_LOG(ERR, "%s(%s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%s) error code: %d(%s)", __func__, fname, rc, strerror(rc)); rte_errno = EINVAL; return NULL; @@ -319,15 +319,15 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, close(fd); if (bpf == NULL) { - RTE_BPF_LOG(ERR, + RTE_BPF_LOG_LINE(ERR, "%s(fname=\"%s\", sname=\"%s\") failed, " - "error code: %d\n", + "error code: %d", __func__, fname, sname, rte_errno); return NULL; } - RTE_BPF_LOG(INFO, "%s(fname=\"%s\", sname=\"%s\") " - "successfully creates %p(jit={.func=%p,.sz=%zu});\n", + RTE_BPF_LOG_LINE(INFO, "%s(fname=\"%s\", sname=\"%s\") " + "successfully creates %p(jit={.func=%p,.sz=%zu});", __func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz); return bpf; } diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c index 7a8e4a6ef4..793a75ded9 100644 --- a/lib/bpf/bpf_pkt.c +++ b/lib/bpf/bpf_pkt.c @@ -512,7 +512,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, ftx = select_tx_callback(prm->prog_arg.type, flags); if (frx == NULL && ftx == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no callback selected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no callback selected;", __func__, port, queue); return -EINVAL; } @@ -524,7 +524,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, rte_bpf_get_jit(bpf, &jit); if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no JIT generated;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no JIT generated;", __func__, port, queue); rte_bpf_destroy(bpf); return -ENOTSUP; diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index 83c2203622..1babb16bde 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,7 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", + 
RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libelf installed", __func__); rte_errno = ENOTSUP; return NULL; @@ -35,7 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", + RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libpcap installed", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index f246b3c5eb..79be5e917d 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1812,15 +1812,15 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) uint32_t ne; if (nidx > bvf->prm->nb_ins) { - RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " - "next pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, " + "next pc: %u", __func__, get_node_idx(bvf, node), nidx); return -EINVAL; } ne = node->nb_edge; if (ne >= RTE_DIM(node->edge_dest)) { - RTE_BPF_LOG(ERR, "%s: internal error at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: internal error at pc: %u", __func__, get_node_idx(bvf, node)); return -EINVAL; } @@ -1927,7 +1927,7 @@ log_unreachable(const struct bpf_verifier *bvf) if (node->colour == WHITE && ins->code != (BPF_LD | BPF_IMM | EBPF_DW)) - RTE_BPF_LOG(ERR, "unreachable code at pc: %u;\n", i); + RTE_BPF_LOG_LINE(ERR, "unreachable code at pc: %u;", i); } } @@ -1948,8 +1948,8 @@ log_loop(const struct bpf_verifier *bvf) for (j = 0; j != node->nb_edge; j++) { if (node->edge_type[j] == BACK_EDGE) - RTE_BPF_LOG(ERR, - "loop at pc:%u --> pc:%u;\n", + RTE_BPF_LOG_LINE(ERR, + "loop at pc:%u --> pc:%u;", i, node->edge_dest[j]); } } @@ -1979,7 +1979,7 @@ validate(struct bpf_verifier *bvf) err = check_syntax(ins); if (err != 0) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, i); rc |= -EINVAL; } @@ -2048,7 +2048,7 @@ validate(struct bpf_verifier *bvf) dfs(bvf); - RTE_BPF_LOG(DEBUG, "%s(%p) stats:\n" + RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" "nb_nodes=%u;\n" "nb_jcc_nodes=%u;\n" "node_color={[WHITE]=%u, [GREY]=%u,, [BLACK]=%u};\n" @@ -2062,7 +2062,7 @@ validate(struct bpf_verifier *bvf) bvf->edge_type[BACK_EDGE], bvf->edge_type[CROSS_EDGE]); if (bvf->node_colour[BLACK] != bvf->nb_nodes) { - RTE_BPF_LOG(ERR, "%s(%p) unreachable instructions;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) unreachable instructions;", __func__, bvf); log_unreachable(bvf); return -EINVAL; @@ -2070,13 +2070,13 @@ validate(struct bpf_verifier *bvf) if (bvf->node_colour[GREY] != 0 || bvf->node_colour[WHITE] != 0 || bvf->edge_type[UNKNOWN_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) DFS internal error;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) DFS internal error;", __func__, bvf); return -EINVAL; } if (bvf->edge_type[BACK_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) loops detected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) loops detected;", __func__, bvf); log_loop(bvf); return -EINVAL; @@ -2144,8 +2144,8 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) /* get new eval_state for this node */ st = pull_eval_state(bvf); if (st == NULL) { - RTE_BPF_LOG(ERR, - "%s: internal error (out of space) at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, + "%s: internal error (out of space) at pc: %u", __func__, get_node_idx(bvf, node)); return -ENOMEM; } @@ -2157,7 +2157,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) node->evst = bvf->evst; bvf->evst = st; - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, 
"%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), node->evst, bvf->evst); return 0; @@ -2169,7 +2169,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) static void restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), bvf->evst, node->evst); bvf->evst = node->evst; @@ -2184,12 +2184,12 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG_LINE(DEBUG, "%s(pc=%u):", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - RTE_BPF_LOG(DEBUG, + RTE_LOG(DEBUG, BPF, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2263,7 +2263,7 @@ evaluate(struct bpf_verifier *bvf) if (ins_chk[op].eval != NULL && rc == 0) { err = ins_chk[op].eval(bvf, ins + idx); if (err != NULL) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, idx); rc = -EINVAL; } @@ -2312,7 +2312,7 @@ __rte_bpf_validate(struct rte_bpf *bpf) bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR && (sizeof(uint64_t) != sizeof(uintptr_t) || bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) { - RTE_BPF_LOG(ERR, "%s: unsupported argument type\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: unsupported argument type", __func__); return -ENOTSUP; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index 55a9dcc565..bd917a15fc 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -80,12 +80,12 @@ rte_eth_dev_allocate(const char *name) name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN); if (name_len == 0) { - RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Zero length Ethernet device name"); return NULL; } if (name_len >= RTE_ETH_NAME_MAX_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Ethernet device name is too long"); return NULL; } @@ -96,16 +96,16 @@ rte_eth_dev_allocate(const char *name) goto unlock; if (eth_dev_allocated(name) != NULL) { - RTE_ETHDEV_LOG(ERR, - "Ethernet device with name %s already allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethernet device with name %s already allocated", name); goto unlock; } port_id = eth_dev_find_free_port(); if (port_id == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Reached maximum number of Ethernet ports\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Reached maximum number of Ethernet ports"); goto unlock; } @@ -163,8 +163,8 @@ rte_eth_dev_attach_secondary(const char *name) break; } if (i == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Device %s is not driven by the primary process\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Device %s is not driven by the primary process", name); } else { eth_dev = eth_dev_get(i); @@ -302,8 +302,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, device->numa_node); if (!ethdev->data->dev_private) { - RTE_ETHDEV_LOG(ERR, - "failed to allocate private data\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "failed to allocate private data"); retval = -ENOMEM; goto probe_failed; } @@ -311,8 +311,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, } else { ethdev = rte_eth_dev_attach_secondary(name); if (!ethdev) { - RTE_ETHDEV_LOG(ERR, - "secondary process attach failed, ethdev doesn't 
exist\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "secondary process attach failed, ethdev doesn't exist"); return -ENODEV; } } @@ -322,15 +322,15 @@ rte_eth_dev_create(struct rte_device *device, const char *name, if (ethdev_bus_specific_init) { retval = ethdev_bus_specific_init(ethdev, bus_init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, - "ethdev bus specific initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "ethdev bus specific initialisation failed"); goto probe_failed; } } retval = ethdev_init(ethdev, init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ethdev initialisation failed"); goto probe_failed; } @@ -394,7 +394,7 @@ void rte_eth_dev_internal_reset(struct rte_eth_dev *dev) { if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u must be stopped to allow reset", dev->data->port_id); return; } @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "duplicated representor key: %s", dargs); result = -1; goto parse_cleanup; @@ -524,7 +524,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); return -ENAMETOOLONG; } @@ -549,7 +549,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); rte_errno = ENAMETOOLONG; return NULL; } @@ -559,8 +559,8 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) || size > mz->len || ((uintptr_t)mz->addr & (align - 1)) != 0) { - RTE_ETHDEV_LOG(ERR, - "memzone %s does not justify the requested attributes\n", + RTE_ETHDEV_LOG_LINE(ERR, + "memzone %s does not justify the requested attributes", mz->name); return NULL; } @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG_LINE(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index ddb559aa95..737fff1833 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", + RTE_ETHDEV_LOG_LINE(ERR, "NULL pointer eth_dev=%p pci_dev=%p", (void *)eth_dev, (void *)pci_dev); return; } diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 0e1c7b23c1..a656df293c 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ 
rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG_LINE(ERR, "wrong representor format: %s", str); return str == NULL ? -1 : 0; } @@ -214,7 +214,7 @@ dummy_eth_rx_burst(void *rxq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->rx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->rx_warn_once = true; @@ -233,7 +233,7 @@ dummy_eth_tx_burst(void *txq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->tx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->tx_warn_once = true; @@ -337,7 +337,7 @@ eth_dev_shared_data_prepare(void) sizeof(*eth_dev_shared_data), rte_socket_id(), flags); if (mz == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot allocate ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot allocate ethdev shared data"); goto out; } @@ -355,7 +355,7 @@ eth_dev_shared_data_prepare(void) /* Clean remaining any traces of a previous shared mem */ eth_dev_shared_mz = NULL; eth_dev_shared_data = NULL; - RTE_ETHDEV_LOG(ERR, "Cannot lookup ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot lookup ethdev shared data"); goto out; } if (mz == eth_dev_shared_mz && mz->addr == eth_dev_shared_data) diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 311beb17cb..bc003db8af 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); + RTE_ETHDEV_LOG_LINE(ERR, "cannot parse argument list"); rte_errno = EINVAL; return NULL; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 9dd0efa9d8..c5e75a91c8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -182,13 +182,13 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) int str_size; if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL iterator"); return -EINVAL; } if (devargs_str == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot initialize iterator from NULL device description string\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot initialize iterator from NULL device description string"); return -EINVAL; } @@ -279,7 +279,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) error: if (ret == -ENOTSUP) - RTE_ETHDEV_LOG(ERR, "Bus %s does not support iterating.\n", + RTE_ETHDEV_LOG_LINE(ERR, "Bus %s does not support iterating.", iter->bus->name); rte_devargs_reset(&devargs); free(bus_str); @@ -291,8 +291,8 @@ uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter) { if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get next device from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get next device from NULL iterator"); return RTE_MAX_ETHPORTS; } @@ -331,7 +331,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) { if (iter == NULL) { - 
RTE_ETHDEV_LOG(ERR, "Cannot do clean up from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot do clean up from NULL iterator"); return; } @@ -447,7 +447,7 @@ rte_eth_dev_owner_new(uint64_t *owner_id) int ret; if (owner_id == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get new owner ID to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get new owner ID to NULL"); return -EINVAL; } @@ -477,30 +477,30 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, struct rte_eth_dev_owner *port_owner; if (port_id >= RTE_MAX_ETHPORTS || !eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (new_owner == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u owner from NULL owner\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u owner from NULL owner", port_id); return -EINVAL; } if (!eth_is_valid_owner_id(new_owner->id) && !eth_is_valid_owner_id(old_owner_id)) { - RTE_ETHDEV_LOG(ERR, - "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64, old_owner_id, new_owner->id); return -EINVAL; } port_owner = &rte_eth_devices[port_id].data->owner; if (port_owner->id != old_owner_id) { - RTE_ETHDEV_LOG(ERR, - "Cannot set owner to port %u already owned by %s_%016"PRIX64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set owner to port %u already owned by %s_%016"PRIX64, port_id, port_owner->name, port_owner->id); return -EPERM; } @@ -510,7 +510,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, port_owner->id = new_owner->id; - RTE_ETHDEV_LOG(DEBUG, "Port %u owner is %s_%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Port %u owner is %s_%016"PRIx64, port_id, new_owner->name, new_owner->id); return 0; @@ -575,14 +575,14 @@ rte_eth_dev_owner_delete(const uint64_t owner_id) memset(&data->owner, 0, sizeof(struct rte_eth_dev_owner)); } - RTE_ETHDEV_LOG(NOTICE, - "All port owners owned by %016"PRIx64" identifier have removed\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "All port owners owned by %016"PRIx64" identifier have removed", owner_id); eth_dev_shared_data->allocated_owners--; eth_dev_shared_data_release(); } else { - RTE_ETHDEV_LOG(ERR, - "Invalid owner ID=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner ID=%016"PRIx64, owner_id); ret = -EINVAL; } @@ -604,13 +604,13 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner) ethdev = &rte_eth_devices[port_id]; if (!eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (owner == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u owner to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u owner to NULL", port_id); return -EINVAL; } @@ -699,7 +699,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u name to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u name to NULL", port_id); return -EINVAL; } @@ -724,13 +724,13 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get port ID from NULL name"); return -EINVAL; } if (port_id == NULL) 
{ - RTE_ETHDEV_LOG(ERR, - "Cannot get port ID to NULL for %s\n", name); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get port ID to NULL for %s", name); return -EINVAL; } @@ -766,16 +766,16 @@ eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Rx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Rx queue_id=%u of device with port_id=%u", rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queues[rx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", rx_queue_id, port_id); return -EINVAL; } @@ -790,16 +790,16 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id) if (tx_queue_id >= dev->data->nb_tx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Tx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Tx queue_id=%u of device with port_id=%u", tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queues[tx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", tx_queue_id, port_id); return -EINVAL; } @@ -839,8 +839,8 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any queue", port_id); return -EINVAL; } @@ -853,15 +853,15 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", rx_queue_id, port_id); return 0; } @@ -890,15 +890,15 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", rx_queue_id, port_id); return 0; } @@ -920,8 +920,8 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any 
queue", port_id); return -EINVAL; } @@ -934,15 +934,15 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", tx_queue_id, port_id); return 0; } @@ -971,15 +971,15 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", tx_queue_id, port_id); return 0; } @@ -1153,19 +1153,19 @@ eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size, if (dev_info_size == 0) { if (config_size != max_rx_pkt_len) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size" - " %u != %u is not allowed\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size" + " %u != %u is not allowed", port_id, config_size, max_rx_pkt_len); ret = -EINVAL; } } else if (config_size > dev_info_size) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "> max allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "> max allowed value %u", port_id, config_size, dev_info_size); ret = -EINVAL; } else if (config_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "< min allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "< min allowed value %u", port_id, config_size, (unsigned int)RTE_ETHER_MIN_LEN); ret = -EINVAL; } @@ -1203,16 +1203,16 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads, /* Check if any offload is requested but not enabled. */ offload = RTE_BIT64(rte_ctz64(offloads_diff)); if (offload & req_offloads) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to enable %s offload %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to enable %s offload %s", port_id, offload_type, offload_name(offload)); ret = -EINVAL; } /* Check if offload couldn't be disabled. 
*/ if (offload & set_offloads) { - RTE_ETHDEV_LOG(DEBUG, - "Port %u %s offload %s is not requested but enabled\n", + RTE_ETHDEV_LOG_LINE(DEBUG, + "Port %u %s offload %s is not requested but enabled", port_id, offload_type, offload_name(offload)); } @@ -1244,14 +1244,14 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, uint32_t frame_size; if (mtu < dev_info->min_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) < device min MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) < device min MTU (%u) for port_id %u", mtu, dev_info->min_mtu, port_id); return -EINVAL; } if (mtu > dev_info->max_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) > device max MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) > device max MTU (%u) for port_id %u", mtu, dev_info->max_mtu, port_id); return -EINVAL; } @@ -1260,15 +1260,15 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, dev_info->max_mtu); frame_size = mtu + overhead_len; if (frame_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) < min frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) < min frame size (%u) for port_id %u", frame_size, RTE_ETHER_MIN_LEN, port_id); return -EINVAL; } if (frame_size > dev_info->max_rx_pktlen) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) > device max frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) > device max frame size (%u) for port_id %u", frame_size, dev_info->max_rx_pktlen, port_id); return -EINVAL; } @@ -1292,8 +1292,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot configure ethdev port %u from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot configure ethdev port %u from NULL config", port_id); return -EINVAL; } @@ -1302,8 +1302,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, return -ENOTSUP; if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } @@ -1334,7 +1334,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.reserved_64s[1] != 0 || dev_conf->rxmode.reserved_ptrs[0] != NULL || dev_conf->rxmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rxmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1343,7 +1343,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.reserved_64s[1] != 0 || dev_conf->txmode.reserved_ptrs[0] != NULL || dev_conf->txmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "txmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1368,16 +1368,16 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, } if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Rx queues requested (%u) is greater than max supported(%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Number of Rx queues requested (%u) is greater than max supported(%d)", nb_rx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; } if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Tx queues requested (%u) is greater than max supported(%d)\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Number of Tx queues requested (%u) is greater than max supported(%d)", nb_tx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; @@ -1389,14 +1389,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, * configured device. */ if (nb_rx_q > dev_info.max_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u", port_id, nb_rx_q, dev_info.max_rx_queues); ret = -EINVAL; goto rollback; } if (nb_tx_q > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u", port_id, nb_tx_q, dev_info.max_tx_queues); ret = -EINVAL; goto rollback; @@ -1405,14 +1405,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check that the device supports requested interrupts */ if ((dev_conf->intr_conf.lsc == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support lsc\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support lsc", dev->device->driver->name); ret = -EINVAL; goto rollback; } if ((dev_conf->intr_conf.rmv == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_RMV))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support rmv\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support rmv", dev->device->driver->name); ret = -EINVAL; goto rollback; @@ -1456,14 +1456,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Rx offloads %s", port_id, eth_dev_offload_names( dev_conf->rxmode.offloads & ~dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s", port_id, eth_dev_offload_names(dev_conf->rxmode.offloads, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Rx offloads %s", port_id, eth_dev_offload_names(dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); @@ -1474,14 +1474,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Tx offloads %s", port_id, eth_dev_offload_names( dev_conf->txmode.offloads & ~dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s", port_id, eth_dev_offload_names(dev_conf->txmode.offloads, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Tx offloads %s", port_id, eth_dev_offload_names(dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); ret = -EINVAL; @@ -1495,8 +1495,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if 
((dev_info.flow_type_rss_offloads | dev_conf->rx_adv_conf.rss_conf.rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, dev_conf->rx_adv_conf.rss_conf.rss_hf, dev_info.flow_type_rss_offloads); ret = -EINVAL; @@ -1506,8 +1506,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check if Rx RSS distribution is disabled but RSS hash is enabled. */ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) && (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested", port_id, rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH)); ret = -EINVAL; @@ -1516,8 +1516,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if (dev_conf->rx_adv_conf.rss_conf.rss_key != NULL && dev_conf->rx_adv_conf.rss_conf.rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, dev_conf->rx_adv_conf.rss_conf.rss_key_len, dev_info.hash_key_size); ret = -EINVAL; @@ -1527,9 +1527,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, algorithm = dev_conf->rx_adv_conf.rss_conf.algorithm; if ((size_t)algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, algorithm, dev_info.rss_algo_capa); ret = -EINVAL; goto rollback; @@ -1540,8 +1540,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, */ diag = eth_dev_rx_queue_config(dev, nb_rx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_rx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_rx_queue_config = %d", port_id, diag); ret = diag; goto rollback; @@ -1549,8 +1549,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = eth_dev_tx_queue_config(dev, nb_tx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_tx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_tx_queue_config = %d", port_id, diag); eth_dev_rx_queue_config(dev, 0); ret = diag; @@ -1559,7 +1559,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = (*dev->dev_ops->dev_configure)(dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u dev_configure = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u dev_configure = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1568,7 +1568,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Initialize Rx profiling if enabled at compilation time. 
*/ diag = __rte_eth_dev_profile_init(port_id, dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u __rte_eth_dev_profile_init = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u __rte_eth_dev_profile_init = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1666,8 +1666,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1676,8 +1676,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1693,8 +1693,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1703,8 +1703,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1728,15 +1728,15 @@ rte_eth_dev_start(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already started", port_id); return 0; } @@ -1757,13 +1757,13 @@ rte_eth_dev_start(uint16_t port_id) ret = eth_dev_config_restore(dev, &dev_info, port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Error during restoring configuration for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Error during restoring configuration for device (port %u): %s", port_id, rte_strerror(-ret)); ret_stop = rte_eth_dev_stop(port_id); if (ret_stop != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u): %s", port_id, rte_strerror(-ret_stop)); } @@ -1796,8 +1796,8 @@ rte_eth_dev_stop(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already stopped", port_id); return 0; } @@ -1866,7 +1866,7 @@ rte_eth_dev_close(uint16_t port_id) */ if (rte_eal_process_type() == RTE_PROC_PRIMARY && dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Cannot close started device (port %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot close started device (port %u)", port_id); return -EINVAL; } @@ -1897,8 +1897,8 @@ 
rte_eth_dev_reset(uint16_t port_id) ret = rte_eth_dev_stop(port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u) before reset: %s - ignore\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u) before reset: %s - ignore", port_id, rte_strerror(-ret)); } ret = eth_err(port_id, dev->dev_ops->dev_reset(dev)); @@ -1946,7 +1946,7 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, */ if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) { - RTE_ETHDEV_LOG(ERR, "%s private_data_size %u < %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "%s private_data_size %u < %u", mp->name, mp->private_data_size, (unsigned int) sizeof(struct rte_pktmbuf_pool_private)); @@ -1954,8 +1954,8 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, } data_room_size = rte_pktmbuf_data_room_size(mp); if (data_room_size < offset + min_length) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (%u + %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", mp->name, data_room_size, offset + min_length, offset, min_length); return -EINVAL; @@ -2001,8 +2001,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, int i; if (n_seg > seg_capa->max_nseg) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx segments %u exceed supported %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx segments %u exceed supported %u", n_seg, seg_capa->max_nseg); return -EINVAL; } @@ -2023,24 +2023,24 @@ rte_eth_rx_queue_check_split(uint16_t port_id, uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { - RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null mempool pointer"); ret = -EINVAL; goto out; } if (seg_idx != 0 && mp_first != mpl && seg_capa->multi_pools == 0) { - RTE_ETHDEV_LOG(ERR, "Receiving to multiple pools is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Receiving to multiple pools is not supported"); ret = -ENOTSUP; goto out; } if (offset != 0) { if (seg_capa->offset_allowed == 0) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation with offset is not supported"); ret = -ENOTSUP; goto out; } if (offset & offset_mask) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation invalid offset alignment %u, %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation invalid offset alignment %u, %u", offset, seg_capa->offset_align_log2); ret = -EINVAL; @@ -2053,22 +2053,22 @@ rte_eth_rx_queue_check_split(uint16_t port_id, if (proto_hdr != 0) { /* Split based on protocol headers. 
*/ if (length != 0) { - RTE_ETHDEV_LOG(ERR, - "Do not set length split and protocol split within a segment\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Do not set length split and protocol split within a segment" ); ret = -EINVAL; goto out; } if ((proto_hdr & prev_proto_hdrs) != 0) { - RTE_ETHDEV_LOG(ERR, - "Repeat with previous protocol headers or proto-split after length-based split\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Repeat with previous protocol headers or proto-split after length-based split" ); ret = -EINVAL; goto out; } if (ptype_cnt <= 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to get supported buffer split header protocols\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to get supported buffer split header protocols", port_id); ret = -ENOTSUP; goto out; @@ -2078,8 +2078,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, break; } if (i == ptype_cnt) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx split header protocols 0x%x is not supported.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx split header protocols 0x%x is not supported.", proto_hdr); ret = -EINVAL; goto out; @@ -2109,8 +2109,8 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, int ret; if (n_mempools > dev_info->max_rx_mempools) { - RTE_ETHDEV_LOG(ERR, - "Too many Rx mempools %u vs maximum %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Too many Rx mempools %u vs maximum %u", n_mempools, dev_info->max_rx_mempools); return -EINVAL; } @@ -2119,7 +2119,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, struct rte_mempool *mp = rx_mempools[pool_idx]; if (mp == NULL) { - RTE_ETHDEV_LOG(ERR, "null Rx mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null Rx mempool pointer"); return -EINVAL; } @@ -2153,7 +2153,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } @@ -2165,7 +2165,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_conf->reserved_64s[1] != 0 || rx_conf->reserved_ptrs[0] != NULL || rx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Rx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx conf reserved fields not zero"); return -EINVAL; } @@ -2181,8 +2181,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if ((mp != NULL) + (rx_conf != NULL && rx_conf->rx_nseg > 0) + (rx_conf != NULL && rx_conf->rx_nmempool > 0) != 1) { - RTE_ETHDEV_LOG(ERR, - "Ambiguous Rx mempools configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Ambiguous Rx mempools configuration"); return -EINVAL; } @@ -2196,9 +2196,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mbp_buf_size = rte_pktmbuf_data_room_size(mp); buf_data_size = mbp_buf_size - RTE_PKTMBUF_HEADROOM; if (buf_data_size > dev_info.max_rx_bufsize) - RTE_ETHDEV_LOG(DEBUG, + RTE_ETHDEV_LOG_LINE(DEBUG, "For port_id=%u, the mbuf data buffer size (%u) is bigger than " - "max buffer size (%u) device can utilize, so mbuf size can be reduced.\n", + "max buffer size (%u) device can utilize, so mbuf size can be reduced.", port_id, buf_data_size, dev_info.max_rx_bufsize); } else if (rx_conf != NULL && rx_conf->rx_nseg > 0) { const struct rte_eth_rxseg_split *rx_seg; @@ -2206,8 +2206,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, /* Extended multi-segment configuration check. 
*/ if (rx_conf->rx_seg == NULL) { - RTE_ETHDEV_LOG(ERR, - "Memory pool is null and no multi-segment configuration provided\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Memory pool is null and no multi-segment configuration provided"); return -EINVAL; } @@ -2221,13 +2221,13 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No Rx segmentation offload configured"); return -EINVAL; } } else if (rx_conf != NULL && rx_conf->rx_nmempool > 0) { /* Extended multi-pool configuration check. */ if (rx_conf->rx_mempools == NULL) { - RTE_ETHDEV_LOG(ERR, "Memory pools array is null\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Memory pools array is null"); return -EINVAL; } @@ -2238,7 +2238,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "Missing Rx mempool configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Missing Rx mempool configuration"); return -EINVAL; } @@ -2254,8 +2254,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc < dev_info.rx_desc_lim.nb_min || nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_rx_desc, dev_info.rx_desc_lim.nb_max, dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_align); @@ -2299,9 +2299,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, */ if ((local_conf.offloads & dev_info.rx_queue_offload_capa) != local_conf.offloads) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d rx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, rx_queue_id, local_conf.offloads, dev_info.rx_queue_offload_capa, __func__); @@ -2310,8 +2310,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (local_conf.share_group > 0 && (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share", port_id, rx_queue_id, local_conf.share_group); return -EINVAL; } @@ -2367,20 +2367,20 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Rx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Rx hairpin queue from NULL config", port_id); return -EINVAL; } if (conf->reserved != 0) { - RTE_ETHDEV_LOG(ERR, - "Rx hairpin reserved field not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Rx hairpin reserved field not zero"); return -EINVAL; } @@ -2393,42 +2393,42 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (nb_rx_desc == 0) nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for 
nb_rx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Rx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Rx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Rx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2472,7 +2472,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -2484,7 +2484,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, tx_conf->reserved_64s[1] != 0 || tx_conf->reserved_ptrs[0] != NULL || tx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Tx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Tx conf reserved fields not zero"); return -EINVAL; } @@ -2502,8 +2502,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc > dev_info.tx_desc_lim.nb_max || nb_tx_desc < dev_info.tx_desc_lim.nb_min || nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_tx_desc, dev_info.tx_desc_lim.nb_max, dev_info.tx_desc_lim.nb_min, dev_info.tx_desc_lim.nb_align); @@ -2547,9 +2547,9 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, */ if ((local_conf.offloads & dev_info.tx_queue_offload_capa) != local_conf.offloads) { - 
RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d tx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, tx_queue_id, local_conf.offloads, dev_info.tx_queue_offload_capa, __func__); @@ -2576,13 +2576,13 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Tx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Tx hairpin queue from NULL config", port_id); return -EINVAL; } @@ -2596,42 +2596,42 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc == 0) nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Tx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Tx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Tx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2671,7 +2671,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is not started\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is not started", tx_port); 
return -EBUSY; } @@ -2679,8 +2679,8 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_bind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to bind hairpin Tx %d" - " to Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to bind hairpin Tx %d" + " to Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_bind(tx_port, rx_port, ret); @@ -2698,7 +2698,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is already stopped\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is already stopped", tx_port); return -EBUSY; } @@ -2706,8 +2706,8 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_unbind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to unbind hairpin Tx %d" - " from Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to unbind hairpin Tx %d" + " from Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_unbind(tx_port, rx_port, ret); @@ -2726,15 +2726,15 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, dev = &rte_eth_devices[port_id]; if (peer_ports == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to NULL", port_id); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to array with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to array with zero size", port_id); return -EINVAL; } @@ -2745,7 +2745,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, ret = (*dev->dev_ops->hairpin_get_peer_ports)(dev, peer_ports, len, direction); if (ret < 0) - RTE_ETHDEV_LOG(ERR, "Failed to get %d hairpin peer %s ports\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to get %d hairpin peer %s ports", port_id, direction ? 
"Rx" : "Tx"); rte_eth_trace_hairpin_get_peer_ports(port_id, peer_ports, len, @@ -2780,8 +2780,8 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer, buffer_tx_error_fn cbfn, void *userdata) { if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set Tx buffer error callback to NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set Tx buffer error callback to NULL buffer"); return -EINVAL; } @@ -2799,7 +2799,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size) int ret = 0; if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL buffer"); return -EINVAL; } @@ -2977,7 +2977,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3005,7 +3005,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3093,18 +3093,18 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link) int ret; if (str == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert link to NULL string\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert link to NULL string"); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot convert link to string with zero size\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot convert link to string with zero size"); return -EINVAL; } if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert to string from NULL link\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert to string from NULL link"); return -EINVAL; } @@ -3133,7 +3133,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) dev = &rte_eth_devices[port_id]; if (stats == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u stats to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u stats to NULL", port_id); return -EINVAL; } @@ -3220,15 +3220,15 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (xstat_name == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID from NULL xstat name\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID from NULL xstat name", port_id); return -ENOMEM; } if (id == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID to NULL", port_id); return -ENOMEM; } @@ -3236,7 +3236,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, /* Get count */ cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL); if (cnt_xstats < 0) { - RTE_ETHDEV_LOG(ERR, "Cannot get count of xstats\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get count of xstats"); return -ENODEV; } @@ -3245,7 +3245,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, if (cnt_xstats != rte_eth_xstats_get_names_by_id( port_id, xstats_names, cnt_xstats, NULL)) { - RTE_ETHDEV_LOG(ERR, "Cannot get xstats lookup\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get xstats lookup"); return -1; } @@ -3376,7 +3376,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, 
sizeof(struct rte_eth_xstat_name)); if (!xstats_names_copy) { - RTE_ETHDEV_LOG(ERR, "Can't allocate memory\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Can't allocate memory"); return -ENOMEM; } @@ -3404,7 +3404,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); free(xstats_names_copy); return -1; } @@ -3600,7 +3600,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); return -1; } values[i] = xstats[ids[i]].value; @@ -3748,8 +3748,8 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size) dev = &rte_eth_devices[port_id]; if (fw_version == NULL && fw_size > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FW version to NULL when string size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FW version to NULL when string size is non zero", port_id); return -EINVAL; } @@ -3781,7 +3781,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info) dev = &rte_eth_devices[port_id]; if (dev_info == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u info to NULL", port_id); return -EINVAL; } @@ -3837,8 +3837,8 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf) dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u configuration to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u configuration to NULL", port_id); return -EINVAL; } @@ -3862,8 +3862,8 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3912,8 +3912,8 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (num > 0 && set_ptypes == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u set packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u set packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3992,7 +3992,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, struct rte_eth_dev_info dev_info; if (ma == NULL) { - RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__); + RTE_ETHDEV_LOG_LINE(ERR, "%s: invalid parameters", __func__); return -EINVAL; } @@ -4019,8 +4019,8 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr) dev = &rte_eth_devices[port_id]; if (mac_addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u MAC address to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u MAC address to NULL", port_id); return -EINVAL; } @@ -4041,7 +4041,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu) dev = &rte_eth_devices[port_id]; if (mtu == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u MTU to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u MTU to NULL", 
port_id); return -EINVAL; } @@ -4082,8 +4082,8 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) } if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before MTU set\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before MTU set", port_id); return -EINVAL; } @@ -4110,13 +4110,13 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on) if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) { - RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: VLAN-filtering disabled", port_id); return -ENOSYS; } if (vlan_id > 4095) { - RTE_ETHDEV_LOG(ERR, "Port_id=%u invalid vlan_id=%u > 4095\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port_id=%u invalid vlan_id=%u > 4095", port_id, vlan_id); return -EINVAL; } @@ -4156,7 +4156,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_queue_id=%u", rx_queue_id); return -EINVAL; } @@ -4261,10 +4261,10 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask) /* Rx VLAN offloading must be within its device capabilities */ if ((dev_offloads & dev_info.rx_offload_capa) != dev_offloads) { new_offloads = dev_offloads & ~orig_offloads; - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u requested new added VLAN offloads " "0x%" PRIx64 " must be within Rx offloads capabilities " - "0x%" PRIx64 " in %s()\n", + "0x%" PRIx64 " in %s()", port_id, new_offloads, dev_info.rx_offload_capa, __func__); return -EINVAL; @@ -4342,8 +4342,8 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u flow control config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u flow control config to NULL", port_id); return -EINVAL; } @@ -4368,14 +4368,14 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u flow control from NULL config", port_id); return -EINVAL; } if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid send_xon, only 0/1 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid send_xon, only 0/1 allowed"); return -EINVAL; } @@ -4399,14 +4399,14 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u priority flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u priority flow control from NULL config", port_id); return -EINVAL; } if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid priority, only 0-7 allowed"); return -EINVAL; } @@ -4428,16 +4428,16 @@ validate_rx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->rx_pause.tx_qid >= dev_info->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Tx queue not in range for Rx pause requested:%d configured:%d\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "PFC Tx queue not in range for Rx pause requested:%d configured:%d", pfc_queue_conf->rx_pause.tx_qid, dev_info->nb_tx_queues); return -EINVAL; } if (pfc_queue_conf->rx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Rx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Rx pause requested:%d max:%d", pfc_queue_conf->rx_pause.tc, tc_max); return -EINVAL; } @@ -4453,16 +4453,16 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->tx_pause.rx_qid >= dev_info->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Rx queue not in range for Tx pause requested:%d configured:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC Rx queue not in range for Tx pause requested:%d configured:%d", pfc_queue_conf->tx_pause.rx_qid, dev_info->nb_rx_queues); return -EINVAL; } if (pfc_queue_conf->tx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Tx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Tx pause requested:%d max:%d", pfc_queue_conf->tx_pause.tc, tc_max); return -EINVAL; } @@ -4482,7 +4482,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_info == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC info param is NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC info param is NULL for port (%u)", port_id); return -EINVAL; } @@ -4511,7 +4511,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_conf == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC parameters are NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC parameters are NULL for port (%u)", port_id); return -EINVAL; } @@ -4525,7 +4525,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, return ret; if (pfc_info.tc_max == 0) { - RTE_ETHDEV_LOG(ERR, "Ethdev port %u does not support PFC TC values\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port %u does not support PFC TC values", port_id); return -ENOTSUP; } @@ -4533,14 +4533,14 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, /* Check requested mode supported or not */ if (pfc_info.mode_capa == RTE_ETH_FC_RX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Tx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Tx pause unsupported for port (%d)", port_id); return -EINVAL; } if (pfc_info.mode_capa == RTE_ETH_FC_TX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Rx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Rx pause unsupported for port (%d)", port_id); return -EINVAL; } @@ -4597,7 +4597,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t i, idx, shift; if (max_rxq == 0) { - RTE_ETHDEV_LOG(ERR, "No receive queue is available\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No receive queue is available"); return -EINVAL; } @@ -4606,8 +4606,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, shift = i % RTE_ETH_RETA_GROUP_SIZE; if ((reta_conf[idx].mask & RTE_BIT64(shift)) && (reta_conf[idx].reta[shift] >= max_rxq)) { - RTE_ETHDEV_LOG(ERR, - "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u", idx, shift, reta_conf[idx].reta[shift], max_rxq); return -EINVAL; @@ 
-4630,15 +4630,15 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA to NULL", port_id); return -EINVAL; } if (reta_size == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA with zero size", port_id); return -EINVAL; } @@ -4656,7 +4656,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } @@ -4682,8 +4682,8 @@ rte_eth_dev_rss_reta_query(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot query ethdev port %u RSS RETA from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot query ethdev port %u RSS RETA from NULL config", port_id); return -EINVAL; } @@ -4716,8 +4716,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS hash from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS hash from NULL config", port_id); return -EINVAL; } @@ -4729,8 +4729,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, rss_conf->rss_hf = rte_eth_rss_hf_refine(rss_conf->rss_hf); if ((dev_info.flow_type_rss_offloads | rss_conf->rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, rss_conf->rss_hf, dev_info.flow_type_rss_offloads); return -EINVAL; @@ -4738,14 +4738,14 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } if (rss_conf->rss_key != NULL && rss_conf->rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4753,9 +4753,9 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, if ((size_t)rss_conf->algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(rss_conf->algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, rss_conf->algorithm, dev_info.rss_algo_capa); return -EINVAL; } @@ -4782,8 +4782,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u RSS hash config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u RSS hash config to NULL", port_id); return -EINVAL; } @@ -4794,8 +4794,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, if (rss_conf->rss_key 
!= NULL && rss_conf->rss_key_len < dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4837,14 +4837,14 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4869,14 +4869,14 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4938,8 +4938,8 @@ rte_eth_fec_get_capability(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (speed_fec_capa == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FEC capability to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FEC capability to NULL when array size is non zero", port_id); return -EINVAL; } @@ -4963,8 +4963,8 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u current FEC mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u current FEC mode to NULL", port_id); return -EINVAL; } @@ -4988,7 +4988,7 @@ rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == 0) { - RTE_ETHDEV_LOG(ERR, "At least one FEC mode should be specified\n"); + RTE_ETHDEV_LOG_LINE(ERR, "At least one FEC mode should be specified"); return -EINVAL; } @@ -5040,8 +5040,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5050,12 +5050,12 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, return -ENOTSUP; if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } if (pool >= RTE_ETH_64_POOLS) { - RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1); + RTE_ETHDEV_LOG_LINE(ERR, "Pool ID must be 0-%d", RTE_ETH_64_POOLS - 1); return -EINVAL; } @@ -5063,7 +5063,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { index = eth_dev_get_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", 
port_id); return -ENOSPC; } @@ -5103,8 +5103,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot remove ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot remove ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5114,8 +5114,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) index = eth_dev_get_mac_addr_index(port_id, addr); if (index == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u: Cannot remove default MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: Cannot remove default MAC address", port_id); return -EADDRINUSE; } else if (index < 0) @@ -5146,8 +5146,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u default MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u default MAC address from NULL address", port_id); return -EINVAL; } @@ -5161,8 +5161,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) /* Keep address unique in dev->data->mac_addrs[]. */ index = eth_dev_get_mac_addr_index(port_id, addr); if (index > 0) { - RTE_ETHDEV_LOG(ERR, - "New default address for port %u was already in the address list. Please remove it first.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "New default address for port %u was already in the address list. Please remove it first.", port_id); return -EEXIST; } @@ -5220,14 +5220,14 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u unicast hash table from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u unicast hash table from NULL address", port_id); return -EINVAL; } if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } @@ -5239,15 +5239,15 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { if (!on) { - RTE_ETHDEV_LOG(ERR, - "Port %u: the MAC address was not set in UTA\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: the MAC address was not set in UTA", port_id); return -EINVAL; } index = eth_dev_get_hash_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", port_id); return -ENOSPC; } @@ -5309,15 +5309,15 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, link = dev->data->dev_link; if (queue_idx > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:port %u: invalid queue ID=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:port %u: invalid queue ID=%u", port_id, queue_idx); return -EINVAL; } if (tx_rate > link.link_speed) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d", tx_rate, link.link_speed); return -EINVAL; } @@ -5342,15 +5342,15 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id > dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set 
queue avail thresh: port %u: invalid queue ID=%u.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: invalid queue ID=%u.", port_id, queue_id); return -EINVAL; } if (avail_thresh > 99) { - RTE_ETHDEV_LOG(ERR, - "Set queue avail thresh: port %u: threshold should be <= 99.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: threshold should be <= 99.", port_id); return -EINVAL; } @@ -5415,14 +5415,14 @@ rte_eth_dev_callback_register(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot register ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot register ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5485,14 +5485,14 @@ rte_eth_dev_callback_unregister(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot unregister ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot unregister ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5551,13 +5551,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) dev = &rte_eth_devices[port_id]; if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -EPERM; } @@ -5568,8 +5568,8 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) rte_ethdev_trace_rx_intr_ctl(port_id, qid, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, qid, op, epfd, vec); } } @@ -5590,18 +5590,18 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id) dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -1; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -1; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -1; } @@ -5628,18 +5628,18 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector 
unset"); return -EPERM; } @@ -5649,8 +5649,8 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, rte_ethdev_trace_rx_intr_ctl_q(port_id, queue_id, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, queue_id, op, epfd, vec); return rc; } @@ -5949,28 +5949,28 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->rx_queues == NULL || dev->data->rx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Rx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_rx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -5997,28 +5997,28 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->tx_queues == NULL || dev->data->tx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Tx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_tx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -6068,13 +6068,13 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u burst mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6101,13 +6101,13 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Tx queue %u burst mode to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Tx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6134,13 +6134,13 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (pmc == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL", port_id, queue_id); return -EINVAL; } @@ -6224,8 +6224,8 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Rx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Rx timestamp to NULL", port_id); return -EINVAL; } @@ -6253,8 +6253,8 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Tx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Tx timestamp to NULL", port_id); return -EINVAL; } @@ -6299,8 +6299,8 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u timesync time to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u timesync time to NULL", port_id); return -EINVAL; } @@ -6325,8 +6325,8 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot write ethdev port %u timesync from NULL time\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot write ethdev port %u timesync from NULL time", port_id); return -EINVAL; } @@ -6351,7 +6351,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock) dev = &rte_eth_devices[port_id]; if (clock == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot read ethdev port %u clock to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot read ethdev port %u clock to NULL", port_id); return -EINVAL; } @@ -6375,8 +6375,8 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u register info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u register info to NULL", port_id); return -EINVAL; } @@ -6418,8 +6418,8 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM info to NULL", port_id); return -EINVAL; } @@ -6443,8 +6443,8 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u EEPROM from NULL info\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u EEPROM from NULL info", port_id); return -EINVAL; } @@ -6469,8 +6469,8 @@ rte_eth_dev_get_module_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (modinfo == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM module info to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM module info to NULL", port_id); return -EINVAL; } @@ -6495,22 +6495,22 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM info to NULL", port_id); return -EINVAL; } if (info->data == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM data to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM data to NULL", port_id); return -EINVAL; } if (info->length == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM to data with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM to data with zero size", port_id); return -EINVAL; } @@ -6535,8 +6535,8 @@ rte_eth_dev_get_dcb_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dcb_info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u DCB info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u DCB info to NULL", port_id); return -EINVAL; } @@ -6601,8 +6601,8 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (cap == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin capability to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin capability to NULL", port_id); return -EINVAL; } @@ -6627,8 +6627,8 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool) dev = &rte_eth_devices[port_id]; if (pool == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot test ethdev port %u mempool operation from NULL pool\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot test ethdev port %u mempool operation from NULL pool", port_id); return -EINVAL; } @@ -6672,14 +6672,14 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features) dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured != 0) { - RTE_ETHDEV_LOG(ERR, - "The port (ID=%"PRIu16") is already configured\n", + RTE_ETHDEV_LOG_LINE(ERR, + "The port (ID=%"PRIu16") is already configured", port_id); return -EBUSY; } if (features == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid features (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid features (NULL)"); return -EINVAL; } @@ -6708,14 +6708,14 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly capability\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly capability", port_id); return -EINVAL; } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly capability to NULL"); return -EINVAL; } @@ -6743,14 +6743,14 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly configuration", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly info to NULL"); return -EINVAL; } @@ -6776,22 +6776,22 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, dev = 
&rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot set IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot set IP reassembly configuration", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is started, cannot configure IP reassembly params.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is started, cannot configure IP reassembly params.", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Invalid IP reassembly configuration (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid IP reassembly configuration (NULL)"); return -EINVAL; } @@ -6814,7 +6814,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) dev = &rte_eth_devices[port_id]; if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6833,12 +6833,12 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6859,12 +6859,12 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6886,8 +6886,8 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -6940,7 +6940,7 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -6948,30 +6948,30 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before Tx affinity mapping\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before Tx affinity mapping", port_id); return -EINVAL; } if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } aggr_ports = rte_eth_dev_count_aggr_ports(port_id); if (aggr_ports == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u has no aggregated port\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u has no aggregated port", port_id); return -ENOTSUP; } if (affinity > aggr_ports) { - RTE_ETHDEV_LOG(ERR, - "Port %u map invalid affinity %u exceeds the maximum number %u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Port %u map invalid affinity %u exceeds the maximum number %u", port_id, affinity, aggr_ports); return -EINVAL; } diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 77331ce652..18debce99c 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -176,9 +176,11 @@ extern "C" { #include "rte_dev_info.h" extern int rte_eth_dev_logtype; +#define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype -#define RTE_ETHDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) +#define RTE_ETHDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) struct rte_mbuf; @@ -2000,14 +2002,14 @@ struct rte_eth_fec_capa { /* Macros to check for valid port */ #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return retval; \ } \ } while (0) #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return; \ } \ } while (0) @@ -6052,8 +6054,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6067,7 +6069,7 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6123,8 +6125,8 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6196,8 +6198,8 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6267,8 +6269,8 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6391,8 +6393,8 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6406,7 +6408,7 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6501,8 +6503,8 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); rte_errno = ENODEV; return 0; @@ -6515,12 +6517,12 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (!rte_eth_dev_is_valid_port(port_id)) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx port_id=%u", port_id); rte_errno = ENODEV; return 0; } if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); rte_errno = EINVAL; return 0; @@ -6706,8 +6708,8 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid tx_port_id=%u or tx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid tx_port_id=%u or tx_queue_id=%u", tx_port_id, tx_queue_id); return 0; } @@ -6721,7 +6723,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0); if (qd1 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", tx_queue_id, tx_port_id); return 0; } @@ -6732,7 +6734,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u", rx_port_id, rx_queue_id); return 0; } @@ -6746,7 +6748,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0); if (qd2 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", rx_queue_id, rx_port_id); return 0; } diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c index a9c4637521..41e38bdc89 100644 --- a/lib/ethdev/rte_ethdev_cman.c +++ b/lib/ethdev/rte_ethdev_cman.c @@ -21,12 +21,12 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management info is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management info is NULL"); return -EINVAL; } if (dev->dev_ops->cman_info_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -49,12 +49,12 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_init == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -77,12 +77,12 @@ rte_eth_cman_config_set(uint16_t port_id, const struct 
rte_eth_cman_config *conf dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_set == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -104,12 +104,12 @@ rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } diff --git a/lib/ethdev/rte_ethdev_telemetry.c b/lib/ethdev/rte_ethdev_telemetry.c index b01028ce9b..6b873e7abe 100644 --- a/lib/ethdev/rte_ethdev_telemetry.c +++ b/lib/ethdev/rte_ethdev_telemetry.c @@ -36,8 +36,8 @@ eth_dev_parse_port_params(const char *params, uint16_t *port_id, pi = strtoul(params, end_param, 0); if (**end_param != '\0' && !has_next) - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (pi >= UINT16_MAX || !rte_eth_dev_is_valid_port(pi)) return -EINVAL; @@ -153,8 +153,8 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, kvlist = rte_kvargs_parse(end_param, valid_keys); ret = rte_kvargs_process(kvlist, NULL, eth_dev_parse_hide_zero, &hide_zero); if (kvlist == NULL || ret != 0) - RTE_ETHDEV_LOG(NOTICE, - "Unknown extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Unknown extra parameters passed to ethdev telemetry command, ignoring"); rte_kvargs_free(kvlist); } @@ -445,8 +445,8 @@ eth_dev_handle_port_flow_ctrl(const char *cmd __rte_unused, ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get flow ctrl info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get flow ctrl info, ret = %d", ret); return ret; } @@ -496,8 +496,8 @@ ethdev_parse_queue_params(const char *params, bool is_rx, qid = strtoul(qid_param, &end_param, 0); } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (qid >= UINT16_MAX) return -EINVAL; @@ -522,8 +522,8 @@ eth_dev_add_burst_mode(uint16_t port_id, uint16_t queue_id, return 0; if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get burst mode for port %u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get burst mode for port %u", port_id); return ret; } @@ -689,8 +689,8 @@ eth_dev_add_dcb_info(uint16_t port_id, struct rte_tel_data *d) ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get dcb info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get dcb info, ret = %d", ret); return ret; } @@ -769,8 +769,8 @@ eth_dev_handle_port_rss_info(const char *cmd __rte_unused, ret = rte_eth_dev_info_get(port_id, &dev_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device info, ret = %d", ret); return ret; } @@ 
-823,7 +823,7 @@ eth_dev_fec_capas_to_string(uint32_t fec_capa, char *fec_name, uint32_t len) count = snprintf(fec_name, len, "unknown "); if (count >= len) { - RTE_ETHDEV_LOG(WARNING, "FEC capa names may be truncated\n"); + RTE_ETHDEV_LOG_LINE(WARNING, "FEC capa names may be truncated"); count = len; } @@ -994,8 +994,8 @@ eth_dev_handle_port_vlan(const char *cmd __rte_unused, ret = rte_eth_dev_conf_get(port_id, &dev_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device configuration, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device configuration, ret = %d", ret); return ret; } @@ -1115,7 +1115,7 @@ eth_dev_handle_port_tm_caps(const char *cmd __rte_unused, ret = rte_tm_capabilities_get(port_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1229,8 +1229,8 @@ eth_dev_parse_tm_params(char *params, uint32_t *result) ret = strtoul(splited_param, &params, 0); if (*params != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (ret >= UINT32_MAX) return -EINVAL; @@ -1263,7 +1263,7 @@ eth_dev_handle_port_tm_level_caps(const char *cmd __rte_unused, ret = rte_tm_level_capabilities_get(port_id, level_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1389,7 +1389,7 @@ eth_dev_handle_port_tm_node_caps(const char *cmd __rte_unused, return 0; out: - RTE_ETHDEV_LOG(WARNING, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(WARNING, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 549e329558..f49d1d3767 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -18,6 +18,8 @@ #include "ethdev_trace.h" +#define FLOW_LOG RTE_ETHDEV_LOG_LINE + /* Mbuf dynamic field name for metadata. 
*/ int32_t rte_flow_dynf_metadata_offs = -1; @@ -1614,13 +1616,13 @@ rte_flow_info_get(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (port_info == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (likely(!!ops->info_get)) { @@ -1651,23 +1653,23 @@ rte_flow_configure(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" already started.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" already started.", port_id); return -EINVAL; } if (port_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (queue_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.", port_id); return -EINVAL; } if ((port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) && @@ -1704,8 +1706,8 @@ rte_flow_pattern_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1713,8 +1715,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1722,8 +1724,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (pattern == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1791,8 +1793,8 @@ rte_flow_actions_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1800,8 +1802,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1809,8 +1811,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (actions == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1818,8 +1820,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (masks == NULL) { - 
RTE_FLOW_LOG(ERR, - "Port %"PRIu16" masks is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" masks is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1889,8 +1891,8 @@ rte_flow_template_table_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1898,8 +1900,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (table_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" table attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" table attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1907,8 +1909,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (pattern_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1916,8 +1918,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (actions_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index affdc8121b..78b6bbb159 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -46,9 +46,6 @@ extern "C" { #endif -#define RTE_FLOW_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) - /** * Flow rule attributes. 
* diff --git a/lib/ethdev/sff_telemetry.c b/lib/ethdev/sff_telemetry.c index f29e7fa882..b3f239d967 100644 --- a/lib/ethdev/sff_telemetry.c +++ b/lib/ethdev/sff_telemetry.c @@ -19,7 +19,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) int ret; if (d == NULL) { - RTE_ETHDEV_LOG(ERR, "Dict invalid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Dict invalid"); return; } @@ -27,16 +27,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module info, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module info, %d", ret); break; } return; @@ -46,7 +46,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) einfo.length = minfo.eeprom_len; einfo.data = calloc(1, minfo.eeprom_len); if (einfo.data == NULL) { - RTE_ETHDEV_LOG(ERR, "Allocation of port %u EEPROM data failed\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Allocation of port %u EEPROM data failed", port_id); return; } @@ -54,16 +54,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module EEPROM, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module EEPROM, %d", ret); break; } free(einfo.data); @@ -84,7 +84,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) sff_8636_show_all(einfo.data, einfo.length, d); break; default: - RTE_ETHDEV_LOG(NOTICE, "Unsupported module type: %u\n", minfo.type); + RTE_ETHDEV_LOG_LINE(NOTICE, "Unsupported module type: %u", minfo.type); break; } @@ -99,7 +99,7 @@ ssf_add_dict_string(struct rte_tel_data *d, const char *name_str, const char *va if (d->type != TEL_DICT) return; if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) { - RTE_ETHDEV_LOG(ERR, "data_len has exceeded the maximum number of inserts\n"); + RTE_ETHDEV_LOG_LINE(ERR, "data_len has exceeded the maximum number of inserts"); return; } @@ -135,13 +135,13 @@ eth_dev_handle_port_module_eeprom(const char *cmd __rte_unused, const char *para port_id = strtoul(params, &end_param, 0); if (errno != 0 || port_id >= UINT16_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid argument, %d\n", errno); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid argument, %d", errno); return -1; } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters [%s] passed to ethdev telemetry command, ignoring\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters [%s] passed to ethdev telemetry command, ignoring", end_param); rte_tel_data_start_dict(d); diff --git a/lib/member/member.h b/lib/member/member.h new file mode 100644 index 0000000000..ce150f7689 --- /dev/null +++ b/lib/member/member.h @@ -0,0 +1,14 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +extern int librte_member_logtype; +#define RTE_LOGTYPE_MEMBER librte_member_logtype + +#define MEMBER_LOG(level, ...) \ + RTE_LOG(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + __func__, RTE_FMT_TAIL(__VA_ARGS__,))) + diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c index 8f859f7fbd..57eb7affab 100644 --- a/lib/member/rte_member.c +++ b/lib/member/rte_member.c @@ -11,6 +11,7 @@ #include <rte_tailq.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" #include "rte_member_vbf.h" @@ -102,8 +103,8 @@ rte_member_create(const struct rte_member_parameters *params) if (params->key_len == 0 || params->prim_hash_seed == params->sec_hash_seed) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Create setsummary with " - "invalid parameters\n"); + MEMBER_LOG(ERR, "Create setsummary with " + "invalid parameters"); return NULL; } @@ -112,7 +113,7 @@ rte_member_create(const struct rte_member_parameters *params) sketch_key_ring = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(params->top_k), params->socket_id, 0); if (sketch_key_ring == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed"); return NULL; } } @@ -135,7 +136,7 @@ rte_member_create(const struct rte_member_parameters *params) } te = rte_zmalloc("MEMBER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_MEMBER_LOG(ERR, "tailq entry allocation failed\n"); + MEMBER_LOG(ERR, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -144,7 +145,7 @@ rte_member_create(const struct rte_member_parameters *params) sizeof(struct rte_member_setsum), RTE_CACHE_LINE_SIZE, params->socket_id); if (setsum == NULL) { - RTE_MEMBER_LOG(ERR, "Create setsummary failed\n"); + MEMBER_LOG(ERR, "Create setsummary failed"); goto error_unlock_exit; } strlcpy(setsum->name, params->name, sizeof(setsum->name)); @@ -171,8 +172,8 @@ rte_member_create(const struct rte_member_parameters *params) if (ret < 0) goto error_unlock_exit; - RTE_MEMBER_LOG(DEBUG, "Creating a setsummary table with " - "mode %u\n", setsum->type); + MEMBER_LOG(DEBUG, "Creating a setsummary table with " + "mode %u", setsum->type); te->data = (void *)setsum; TAILQ_INSERT_TAIL(member_list, te, next); diff --git a/lib/member/rte_member.h b/lib/member/rte_member.h index b585904368..3278bbb5c1 100644 --- a/lib/member/rte_member.h +++ b/lib/member/rte_member.h @@ -100,15 +100,6 @@ typedef uint16_t member_set_t; #define MEMBER_HASH_FUNC rte_jhash #endif -extern int librte_member_logtype; - -#define RTE_MEMBER_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, \ - librte_member_logtype, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ - __func__, \ - RTE_FMT_TAIL(__VA_ARGS__,))) - /** @internal setsummary structure. 
*/ struct rte_member_setsum; diff --git a/lib/member/rte_member_heap.h b/lib/member/rte_member_heap.h index 9c4a01aebe..e0a3d54eab 100644 --- a/lib/member/rte_member_heap.h +++ b/lib/member/rte_member_heap.h @@ -6,6 +6,7 @@ #ifndef RTE_MEMBER_HEAP_H #define RTE_MEMBER_HEAP_H +#include "member.h" #include <rte_ring_elem.h> #include "rte_member.h" @@ -129,16 +130,16 @@ resize_hash_table(struct minheap *hp) while (1) { new_bkt_cnt = hp->hashtable->bkt_cnt * HASH_RESIZE_MULTI; - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]\n", + MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]", hp->hashtable->num_item / ((float)hp->hashtable->bkt_cnt * HASH_BKT_SIZE)); - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!"); rte_free(hp->hashtable); hp->hashtable = rte_zmalloc_socket(NULL, sizeof(struct hash) + new_bkt_cnt * sizeof(struct hash_bkt), RTE_CACHE_LINE_SIZE, hp->socket); if (hp->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); return -ENOMEM; } @@ -147,8 +148,8 @@ resize_hash_table(struct minheap *hp) for (i = 0; i < hp->size; ++i) { if (hash_table_insert(hp->elem[i].key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, - "Sketch Minheap HT resize insert fail!\n"); + MEMBER_LOG(ERR, + "Sketch Minheap HT resize insert fail!"); break; } } @@ -174,7 +175,7 @@ rte_member_minheap_init(struct minheap *heap, int size, heap->elem = rte_zmalloc_socket(NULL, sizeof(struct node) * size, RTE_CACHE_LINE_SIZE, socket); if (heap->elem == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed"); return -ENOMEM; } @@ -188,7 +189,7 @@ rte_member_minheap_init(struct minheap *heap, int size, RTE_CACHE_LINE_SIZE, socket); if (heap->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); rte_free(heap->elem); return -ENOMEM; } @@ -231,13 +232,13 @@ rte_member_heapify(struct minheap *hp, uint32_t idx, bool update_hash) if (update_hash) { if (hash_table_update(hp->elem[smallest].key, idx + 1, smallest + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } if (hash_table_update(hp->elem[idx].key, smallest + 1, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } } @@ -255,7 +256,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, uint32_t slot_id; if (rte_ring_sc_dequeue_elem(free_key_slot, &slot_id, sizeof(uint32_t)) != 0) { - RTE_MEMBER_LOG(ERR, "Minheap get empty keyslot failed\n"); + MEMBER_LOG(ERR, "Minheap get empty keyslot failed"); return -1; } @@ -270,7 +271,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -279,7 +280,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, if (hash_table_insert(key, i + 1, hp->key_len, hp->hashtable) < 0) { if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash 
Table resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table resize failed"); return -1; } } @@ -296,7 +297,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, uint32_t offset = RTE_PTR_DIFF(hp->elem[idx].key, key_slot) / hp->key_len; if (hash_table_del(key, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -311,7 +312,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, if (hash_table_update(hp->elem[idx].key, hp->size, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -332,7 +333,7 @@ rte_member_minheap_replace_node(struct minheap *hp, recycle_key = hp->elem[0].key; if (hash_table_del(recycle_key, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -340,7 +341,7 @@ rte_member_minheap_replace_node(struct minheap *hp, if (hash_table_update(hp->elem[0].key, hp->size, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -358,7 +359,7 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -367,9 +368,9 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = nd; if (hash_table_insert(new_key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed"); if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed"); return -1; } } diff --git a/lib/member/rte_member_ht.c b/lib/member/rte_member_ht.c index a85561b472..357097ff4b 100644 --- a/lib/member/rte_member_ht.c +++ b/lib/member/rte_member_ht.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_vect.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" @@ -84,8 +85,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, !rte_is_power_of_2(RTE_MEMBER_BUCKET_ENTRIES) || num_entries < RTE_MEMBER_BUCKET_ENTRIES) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership HT create with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership HT create with invalid parameters"); return -EINVAL; } @@ -98,8 +99,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, RTE_CACHE_LINE_SIZE, ss->socket_id); if (buckets == NULL) { - RTE_MEMBER_LOG(ERR, "memory allocation failed for HT " - "setsummary\n"); + MEMBER_LOG(ERR, "memory allocation failed for HT " + "setsummary"); return -ENOMEM; } @@ -121,8 +122,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, #endif ss->sig_cmp_fn = RTE_MEMBER_COMPARE_SCALAR; - RTE_MEMBER_LOG(DEBUG, "Hash table based filter created, " - "the table has %u entries, %u buckets\n", + MEMBER_LOG(DEBUG, "Hash table based filter created, " + "the table has %u entries, %u buckets", num_entries, num_buckets); return 0; } diff --git a/lib/member/rte_member_sketch.c 
b/lib/member/rte_member_sketch.c index d5f35aabe9..e006e835d9 100644 --- a/lib/member/rte_member_sketch.c +++ b/lib/member/rte_member_sketch.c @@ -14,6 +14,7 @@ #include <rte_prefetch.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_sketch.h" #include "rte_member_heap.h" @@ -118,8 +119,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (params->sample_rate == 0 || params->sample_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership Sketch created with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership Sketch created with invalid parameters"); return -EINVAL; } @@ -141,8 +142,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (ss->use_avx512 == true) { #ifdef CC_AVX512_SUPPORT ss->num_row = NUM_ROW_VEC; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch AVX512 update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch AVX512 update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_avx512; ss->sketch_lookup = sketch_lookup_avx512; ss->sketch_delete = sketch_delete_avx512; @@ -151,8 +152,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, #endif { ss->num_row = NUM_ROW_SCALAR; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch SCALAR update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch SCALAR update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_scalar; ss->sketch_lookup = sketch_lookup_scalar; ss->sketch_delete = sketch_delete_scalar; @@ -173,21 +174,21 @@ rte_member_create_sketch(struct rte_member_setsum *ss, sizeof(uint64_t) * num_col * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->table == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Table memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Table memory allocation failed"); return -ENOMEM; } ss->hash_seeds = rte_zmalloc_socket(NULL, sizeof(uint64_t) * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->hash_seeds == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed"); return -ENOMEM; } ss->runtime_var = rte_zmalloc_socket(NULL, sizeof(struct sketch_runtime), RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->runtime_var == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed"); rte_free(ss); return -ENOMEM; } @@ -205,7 +206,7 @@ rte_member_create_sketch(struct rte_member_setsum *ss, runtime->key_slots = rte_zmalloc_socket(NULL, ss->key_len * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->key_slots == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Key Slots allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Key Slots allocation failed"); goto error; } @@ -216,14 +217,14 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (rte_member_minheap_init(&(runtime->heap), params->top_k, ss->socket_id, params->prim_hash_seed) < 0) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap allocation failed"); goto error_runtime; } runtime->report_array = rte_zmalloc_socket(NULL, sizeof(struct node) * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->report_array == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed"); goto error_runtime; } @@ -239,8 +240,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, ss->converge_thresh = 10 * 
pow(ss->error_rate, -2.0) * sqrt(log(1 / delta)); } - RTE_MEMBER_LOG(DEBUG, "Sketch created, " - "the total memory required is %u Bytes\n", ss->num_col * ss->num_row * 8); + MEMBER_LOG(DEBUG, "Sketch created, " + "the total memory required is %u Bytes", ss->num_col * ss->num_row * 8); return 0; @@ -382,8 +383,8 @@ should_converge(const struct rte_member_setsum *ss) /* For count min sketch - L1 norm */ if (runtime_var->pkt_cnt > ss->converge_thresh) { runtime_var->converged = 1; - RTE_MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " - "from key count %"PRIu64"\n", + MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " + "from key count %"PRIu64, runtime_var->pkt_cnt); } } @@ -471,8 +472,8 @@ rte_member_add_sketch(const struct rte_member_setsum *ss, * the rte_member_add_sketch_byte_count routine should be used. */ if (ss->count_byte == 1) { - RTE_MEMBER_LOG(ERR, "Sketch is Byte Mode, " - "should use rte_member_add_byte_count()!\n"); + MEMBER_LOG(ERR, "Sketch is Byte Mode, " + "should use rte_member_add_byte_count()!"); return -EINVAL; } @@ -528,8 +529,8 @@ rte_member_add_sketch_byte_count(const struct rte_member_setsum *ss, /* should not call this API if not in count byte mode */ if (ss->count_byte == 0) { - RTE_MEMBER_LOG(ERR, "Sketch is Pkt Mode, " - "should use rte_member_add()!\n"); + MEMBER_LOG(ERR, "Sketch is Pkt Mode, " + "should use rte_member_add()!"); return -EINVAL; } diff --git a/lib/member/rte_member_vbf.c b/lib/member/rte_member_vbf.c index 5a0c51ecc0..5ad9487fad 100644 --- a/lib/member/rte_member_vbf.c +++ b/lib/member/rte_member_vbf.c @@ -9,6 +9,7 @@ #include <rte_errno.h> #include <rte_log.h> +#include "member.h" #include "rte_member.h" #include "rte_member_vbf.h" @@ -35,7 +36,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, params->false_positive_rate == 0 || params->false_positive_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership vBF create with invalid parameters\n"); + MEMBER_LOG(ERR, "Membership vBF create with invalid parameters"); return -EINVAL; } @@ -56,7 +57,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, if (fp_one_bf == 0) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership BF false positive rate is too small\n"); + MEMBER_LOG(ERR, "Membership BF false positive rate is too small"); return -EINVAL; } @@ -111,10 +112,10 @@ rte_member_create_vbf(struct rte_member_setsum *ss, ss->mul_shift = rte_ctz32(ss->num_set); ss->div_shift = rte_ctz32(32 >> ss->mul_shift); - RTE_MEMBER_LOG(DEBUG, "vector bloom filter created, " + MEMBER_LOG(DEBUG, "vector bloom filter created, " "each bloom filter expects %u keys, needs %u bits, %u hashes, " "with false positive rate set as %.5f, " - "The new calculated vBF false positive rate is %.5f\n", + "The new calculated vBF false positive rate is %.5f", num_keys_per_bf, ss->bits, ss->num_hashes, fp_one_bf, new_fp); ss->table = rte_zmalloc_socket(NULL, ss->num_set * (ss->bits >> 3), diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 5a1ec14d7a..70963e7ee7 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -16,10 +16,10 @@ #include "rte_pdump.h" RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); +#define RTE_LOGTYPE_PDUMP pdump_logtype -/* Macro for printing using RTE_LOG */ -#define PDUMP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, pdump_logtype, "%s(): " fmt, \ +#define PDUMP_LOG_LINE(level, fmt, args...) 
\ + RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ __func__, ## args) /* Used for the multi-process communication */ @@ -181,8 +181,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "rx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "rx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -195,8 +195,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_first_rx_callback(port, qid, pdump_rx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add rx callback, errno=%d", rte_errno); return rte_errno; } @@ -204,15 +204,15 @@ pdump_register_rx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing rx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing rx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_rx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove rx callback, errno=%d", -ret); return ret; } @@ -239,8 +239,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "tx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "tx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -253,8 +253,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add tx callback, errno=%d", rte_errno); return rte_errno; } @@ -262,15 +262,15 @@ pdump_register_tx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing tx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing tx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_tx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove tx callback, errno=%d", -ret); return ret; } @@ -295,22 +295,22 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) /* Check for possible DPDK version mismatch */ if (!(p->ver == V1 || p->ver == V2)) { - PDUMP_LOG(ERR, - "incorrect client version %u\n", p->ver); + PDUMP_LOG_LINE(ERR, + "incorrect client version %u", p->ver); return -EINVAL; } if (p->prm) { if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) { - PDUMP_LOG(ERR, - "invalid BPF program type: %u\n", + PDUMP_LOG_LINE(ERR, + "invalid BPF program type: %u", p->prm->prog_arg.type); return -EINVAL; } filter = rte_bpf_load(p->prm); if (filter == NULL) { - PDUMP_LOG(ERR, "cannot load BPF filter: %s\n", + PDUMP_LOG_LINE(ERR, "cannot load BPF filter: %s", rte_strerror(rte_errno)); return -rte_errno; } @@ -324,8 +324,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_get_port_by_name(p->device, &port); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to get port id for device id=%s\n", + PDUMP_LOG_LINE(ERR, + "failed to get port id for device id=%s", p->device); return -EINVAL; } @@ -336,8 +336,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + 
PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -345,19 +345,19 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) nb_rx_q = dev_info.nb_rx_queues; nb_tx_q = dev_info.nb_tx_queues; if (nb_rx_q == 0 && flags & RTE_PDUMP_FLAG_RX) { - PDUMP_LOG(ERR, - "number of rx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of rx queues cannot be 0"); return -EINVAL; } if (nb_tx_q == 0 && flags & RTE_PDUMP_FLAG_TX) { - PDUMP_LOG(ERR, - "number of tx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of tx queues cannot be 0"); return -EINVAL; } if ((nb_tx_q == 0 || nb_rx_q == 0) && flags == RTE_PDUMP_FLAG_RXTX) { - PDUMP_LOG(ERR, - "both tx&rx queues must be non zero\n"); + PDUMP_LOG_LINE(ERR, + "both tx&rx queues must be non zero"); return -EINVAL; } } @@ -394,7 +394,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) /* recv client requests */ if (mp_msg->len_param != sizeof(*cli_req)) { - PDUMP_LOG(ERR, "failed to recv from client\n"); + PDUMP_LOG_LINE(ERR, "failed to recv from client"); resp->err_value = -EINVAL; } else { cli_req = (const struct pdump_request *)mp_msg->param; @@ -407,7 +407,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) mp_resp.len_param = sizeof(*resp); mp_resp.num_fds = 0; if (rte_mp_reply(&mp_resp, peer) < 0) { - PDUMP_LOG(ERR, "failed to send to client:%s\n", + PDUMP_LOG_LINE(ERR, "failed to send to client:%s", strerror(rte_errno)); return -1; } @@ -424,7 +424,7 @@ rte_pdump_init(void) mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats), rte_socket_id(), 0); if (mz == NULL) { - PDUMP_LOG(ERR, "cannot allocate pdump statistics\n"); + PDUMP_LOG_LINE(ERR, "cannot allocate pdump statistics"); rte_errno = ENOMEM; return -1; } @@ -454,22 +454,22 @@ static int pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp) { if (ring == NULL || mp == NULL) { - PDUMP_LOG(ERR, "NULL ring or mempool\n"); + PDUMP_LOG_LINE(ERR, "NULL ring or mempool"); rte_errno = EINVAL; return -1; } if (mp->flags & RTE_MEMPOOL_F_SP_PUT || mp->flags & RTE_MEMPOOL_F_SC_GET) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "mempool with SP or SC set not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } if (rte_ring_is_prod_single(ring) || rte_ring_is_cons_single(ring)) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "ring with SP or SC set is not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } @@ -481,16 +481,16 @@ static int pdump_validate_flags(uint32_t flags) { if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) { - PDUMP_LOG(ERR, - "invalid flags, should be either rx/tx/rxtx\n"); + PDUMP_LOG_LINE(ERR, + "invalid flags, should be either rx/tx/rxtx"); rte_errno = EINVAL; return -1; } /* mask off the flags we know about */ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) { - PDUMP_LOG(ERR, - "unknown flags: %#x\n", flags); + PDUMP_LOG_LINE(ERR, + "unknown flags: %#x", flags); rte_errno = ENOTSUP; return -1; } @@ -504,14 +504,14 @@ pdump_validate_port(uint16_t port, char *name) int ret = 0; if (port >= RTE_MAX_ETHPORTS) { - PDUMP_LOG(ERR, "Invalid port id %u\n", port); + PDUMP_LOG_LINE(ERR, "Invalid port id %u", port); rte_errno = EINVAL; return -1; } ret = rte_eth_dev_get_name_by_port(port, name); if (ret < 0) { - PDUMP_LOG(ERR, "port %u to name mapping failed\n", + PDUMP_LOG_LINE(ERR, "port %u to name mapping failed", port); rte_errno = EINVAL; return -1; @@ 
-536,8 +536,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, struct pdump_response *resp; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - PDUMP_LOG(ERR, - "pdump enable/disable not allowed in primary process\n"); + PDUMP_LOG_LINE(ERR, + "pdump enable/disable not allowed in primary process"); return -EINVAL; } @@ -570,8 +570,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, } if (ret < 0) - PDUMP_LOG(ERR, - "client request for pdump enable/disable failed\n"); + PDUMP_LOG_LINE(ERR, + "client request for pdump enable/disable failed"); return ret; } @@ -738,8 +738,8 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) memset(stats, 0, sizeof(*stats)); ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -747,7 +747,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) if (pdump_stats == NULL) { if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* rte_pdump_init was not called */ - PDUMP_LOG(ERR, "pdump stats not initialized\n"); + PDUMP_LOG_LINE(ERR, "pdump stats not initialized"); rte_errno = EINVAL; return -1; } @@ -756,7 +756,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS); if (mz == NULL) { /* rte_pdump_init was not called in primary process?? */ - PDUMP_LOG(ERR, "can not find pdump stats\n"); + PDUMP_LOG_LINE(ERR, "can not find pdump stats"); rte_errno = EINVAL; return -1; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index dd143f2cc8..aecfdfa15d 100644 --- a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -72,7 +72,7 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -155,7 +155,7 @@ power_get_available_freqs(struct acpi_power_info *pi) /* Store the available frequencies into power context */ for (i = 0, pi->nb_freqs = 0; i < count; i++) { - POWER_DEBUG_TRACE("Lcore %u frequency[%d]: %s\n", pi->lcore_id, + POWER_DEBUG_LOG("Lcore %u frequency[%d]: %s", pi->lcore_id, i, freqs[i]); pi->freqs[pi->nb_freqs++] = strtoul(freqs[i], &p, POWER_CONVERT_TO_DECIMAL); @@ -164,17 +164,17 @@ power_get_available_freqs(struct acpi_power_info *pi) if ((pi->freqs[0]-1000) == pi->freqs[1]) { pi->turbo_available = 1; pi->turbo_enable = 1; - POWER_DEBUG_TRACE("Lcore %u Can do Turbo Boost\n", + POWER_DEBUG_LOG("Lcore %u Can do Turbo Boost", pi->lcore_id); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Turbo Boost not available on Lcore %u\n", + POWER_DEBUG_LOG("Turbo Boost not available on Lcore %u", pi->lcore_id); } ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", count, pi->lcore_id); out: if (f != NULL) diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index 44581fd48b..f8f43a49b2 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -79,7 +79,7 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) if (idx == 
pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -153,14 +153,14 @@ power_check_turbo(struct amd_pstate_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -277,7 +277,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bc57642cd1..b3d438c4de 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -182,8 +182,8 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Check if current governor is already what we want */ if (strcmp(buf, new_governor) == 0) { ret = 0; - POWER_DEBUG_TRACE("Power management governor of lcore %u is " - "already %s\n", lcore_id, new_governor); + POWER_DEBUG_LOG("Power management governor of lcore %u is " + "already %s", lcore_id, new_governor); goto out; } diff --git a/lib/power/power_common.h b/lib/power/power_common.h index c3fcbf4c10..ea2febbd86 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -14,10 +14,10 @@ extern int power_logtype; #define RTE_LOGTYPE_POWER power_logtype #ifdef RTE_LIBRTE_POWER_DEBUG -#define POWER_DEBUG_TRACE(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt, __func__, ## args) +#define POWER_DEBUG_LOG(fmt, args...) \ + RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) #else -#define POWER_DEBUG_TRACE(fmt, args...) +#define POWER_DEBUG_LOG(fmt, args...) #endif /* check if scaling driver matches one we want */ diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index 83e1e62830..31eb6942a2 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -82,7 +82,7 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -172,14 +172,14 @@ power_check_turbo(struct cppc_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! 
highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -265,7 +265,7 @@ power_get_available_freqs(struct cppc_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 0ee8e603d2..2cc3045056 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -90,7 +90,7 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Uncore frequency '%u' to be set for pkg %02u die %02u\n", + POWER_DEBUG_LOG("Uncore frequency '%u' to be set for pkg %02u die %02u", target_uncore_freq, ui->pkg, ui->die); /* write the minimum value first if the target freq is less than current max */ @@ -235,7 +235,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of pkg %02u die %02u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of pkg %02u die %02u are available", num_uncore_freqs, ui->pkg, ui->die); out: diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 56aa302b5d..ca704e672c 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -104,7 +104,7 @@ power_read_turbo_pct(uint64_t *outVal) goto out; } - POWER_DEBUG_TRACE("power turbo pct: %"PRIu64"\n", *outVal); + POWER_DEBUG_LOG("power turbo pct: %"PRIu64, *outVal); out: close(fd); return ret; @@ -204,7 +204,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) max_non_turbo = base_min_ratio + (100 - max_non_turbo) * (base_max_ratio - base_min_ratio) / 100; - POWER_DEBUG_TRACE("no turbo perf %"PRIu64"\n", max_non_turbo); + POWER_DEBUG_LOG("no turbo perf %"PRIu64, max_non_turbo); pi->non_turbo_max_ratio = (uint32_t)max_non_turbo; @@ -310,7 +310,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_min); @@ -333,7 +333,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_max); @@ -434,7 +434,7 @@ power_get_available_freqs(struct pstate_power_info *pi) else base_max_freq = pi->non_turbo_max_ratio * BUS_FREQ; - POWER_DEBUG_TRACE("sys min %u, sys max %u, base_max %u\n", + POWER_DEBUG_LOG("sys min %u, sys max %u, base_max %u", sys_min_freq, sys_max_freq, base_max_freq); @@ -471,7 +471,7 @@ power_get_available_freqs(struct pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index d38a85eb0b..b2c4b49d97 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -73,16 +73,16 @@ regexdev_check_name(const char *name) size_t name_len; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Name can't be NULL\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Name can't be NULL"); return -EINVAL; } name_len = strnlen(name, RTE_REGEXDEV_NAME_MAX_LEN); if (name_len == 0) { - RTE_REGEXDEV_LOG(ERR, "Zero length RegEx device name\n"); + 
RTE_REGEXDEV_LOG_LINE(ERR, "Zero length RegEx device name"); return -EINVAL; } if (name_len >= RTE_REGEXDEV_NAME_MAX_LEN) { - RTE_REGEXDEV_LOG(ERR, "RegEx device name is too long\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device name is too long"); return -EINVAL; } return (int)name_len; @@ -101,17 +101,17 @@ rte_regexdev_register(const char *name) return NULL; dev = regexdev_allocated(name); if (dev != NULL) { - RTE_REGEXDEV_LOG(ERR, "RegEx device already allocated\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device already allocated"); return NULL; } dev_id = regexdev_find_free_dev(); if (dev_id == RTE_MAX_REGEXDEV_DEVS) { - RTE_REGEXDEV_LOG - (ERR, "Reached maximum number of RegEx devices\n"); + RTE_REGEXDEV_LOG_LINE + (ERR, "Reached maximum number of RegEx devices"); return NULL; } if (regexdev_shared_data_prepare() < 0) { - RTE_REGEXDEV_LOG(ERR, "Cannot allocate RegEx shared data\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Cannot allocate RegEx shared data"); return NULL; } @@ -215,8 +215,8 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) if (*dev->dev_ops->dev_configure == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -225,66 +225,66 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) return ret; if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_CROSS_BUFFER_SCAN_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_CROSS_BUFFER_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support cross buffer scan\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support cross buffer scan", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_AS_END_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_AS_END_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match as end\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match as end", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_ALL_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_ALL_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match all\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match all", dev_id); return -EINVAL; } if (cfg->nb_groups == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups must be > 0", dev_id); return -EINVAL; } if (cfg->nb_groups > dev_info.max_groups) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups %d > %d", dev_id, cfg->nb_groups, dev_info.max_groups); return -EINVAL; } if (cfg->nb_max_matches == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches must be > 0", dev_id); return -EINVAL; } if (cfg->nb_max_matches > dev_info.max_matches) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches %d > %d", dev_id, cfg->nb_max_matches, dev_info.max_matches); return -EINVAL; } if (cfg->nb_queue_pairs == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues must be > 0", dev_id); return -EINVAL; } if (cfg->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues %d > %d", dev_id, 
cfg->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } if (cfg->nb_rules_per_group == 0) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group must be > 0", dev_id); return -EINVAL; } if (cfg->nb_rules_per_group > dev_info.max_rules_per_group) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group %d > %d", dev_id, cfg->nb_rules_per_group, dev_info.max_rules_per_group); return -EINVAL; @@ -306,21 +306,21 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id, if (*dev->dev_ops->dev_qp_setup == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } if (queue_pair_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u invalid queue %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u invalid queue %d > %d", dev_id, queue_pair_id, dev->data->dev_conf.nb_queue_pairs); return -EINVAL; } if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -383,7 +383,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_get == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -401,7 +401,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_set == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -420,7 +420,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, if (*dev->dev_ops->dev_rule_db_update == NULL) return -ENOTSUP; if (rules == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -450,7 +450,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db, if (*dev->dev_ops->dev_db_import == NULL) return -ENOTSUP; if (rule_db == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -480,7 +480,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id, if (*dev->dev_ops->dev_xstats_names_get == NULL) return -ENOTSUP; if (xstats_map == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d xstats map can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d xstats map can't be NULL", dev_id); return -EINVAL; } @@ -498,11 +498,11 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_get == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d values can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d values can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_get)(dev, ids, values, n); @@ -519,15 +519,15 
@@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name, if (*dev->dev_ops->dev_xstats_by_name_get == NULL) return -ENOTSUP; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d name can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d name can't be NULL", dev_id); return -EINVAL; } if (id == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d id can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d id can't be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d value can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d value can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_by_name_get)(dev, name, id, value); @@ -544,7 +544,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_reset == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_reset)(dev, ids, nb_ids); @@ -572,7 +572,7 @@ rte_regexdev_dump(uint8_t dev_id, FILE *f) if (*dev->dev_ops->dev_dump == NULL) return -ENOTSUP; if (f == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d file can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d file can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_dump)(dev, f); diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index d50af775b5..dc111317a5 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -206,21 +206,23 @@ extern "C" { #define RTE_REGEXDEV_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN extern int rte_regexdev_logtype; +#define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype -#define RTE_REGEXDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_regexdev_logtype, "" __VA_ARGS__) +#define RTE_REGEXDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return retval; \ } \ } while (0) #define RTE_REGEXDEV_VALID_DEV_ID_OR_RET(dev_id) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return; \ } \ } while (0) @@ -1475,7 +1477,7 @@ rte_regexdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->enqueue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif @@ -1535,7 +1537,7 @@ rte_regexdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->dequeue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 92982842a8..5c655e2b25 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -56,7 +56,10 @@ static const char *socket_dir; /* runtime directory */ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); -#define TMTY_LOG(l, ...) 
rte_log(RTE_LOG_ ## l, logtype, "TELEMETRY: " __VA_ARGS__) +#define RTE_LOGTYPE_TMTY logtype +#define TMTY_LOG_LINE(l, ...) \ + RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__,))) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; @@ -417,7 +420,7 @@ socket_listener(void *socket) struct socket *s = (struct socket *)socket; int s_accepted = accept(s->sock, NULL, NULL); if (s_accepted < 0) { - TMTY_LOG(ERR, "Error with accept, telemetry thread quitting\n"); + TMTY_LOG_LINE(ERR, "Error with accept, telemetry thread quitting"); return NULL; } if (s->num_clients != NULL) { @@ -433,7 +436,7 @@ socket_listener(void *socket) rc = pthread_create(&th, NULL, s->fn, (void *)(uintptr_t)s_accepted); if (rc != 0) { - TMTY_LOG(ERR, "Error with create client thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create client thread: %s", strerror(rc)); close(s_accepted); if (s->num_clients != NULL) @@ -469,22 +472,22 @@ create_socket(char *path) { int sock = socket(AF_UNIX, SOCK_SEQPACKET, 0); if (sock < 0) { - TMTY_LOG(ERR, "Error with socket creation, %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error with socket creation, %s", strerror(errno)); return -1; } struct sockaddr_un sun = {.sun_family = AF_UNIX}; strlcpy(sun.sun_path, path, sizeof(sun.sun_path)); - TMTY_LOG(DEBUG, "Attempting socket bind to path '%s'\n", path); + TMTY_LOG_LINE(DEBUG, "Attempting socket bind to path '%s'", path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { struct stat st; - TMTY_LOG(DEBUG, "Initial bind to socket '%s' failed.\n", path); + TMTY_LOG_LINE(DEBUG, "Initial bind to socket '%s' failed.", path); /* first check if we have a runtime dir */ if (stat(socket_dir, &st) < 0 || !S_ISDIR(st.st_mode)) { - TMTY_LOG(ERR, "Cannot access DPDK runtime directory: %s\n", socket_dir); + TMTY_LOG_LINE(ERR, "Cannot access DPDK runtime directory: %s", socket_dir); close(sock); return -ENOENT; } @@ -496,22 +499,22 @@ create_socket(char *path) } /* socket is not active, delete and attempt rebind */ - TMTY_LOG(DEBUG, "Attempting unlink and retrying bind\n"); + TMTY_LOG_LINE(DEBUG, "Attempting unlink and retrying bind"); unlink(sun.sun_path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { - TMTY_LOG(ERR, "Error binding socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error binding socket: %s", strerror(errno)); close(sock); return -errno; /* if unlink failed, this will be -EADDRINUSE as above */ } } if (listen(sock, 1) < 0) { - TMTY_LOG(ERR, "Error calling listen for socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error calling listen for socket: %s", strerror(errno)); unlink(sun.sun_path); close(sock); return -errno; } - TMTY_LOG(DEBUG, "Socket creation and binding ok\n"); + TMTY_LOG_LINE(DEBUG, "Socket creation and binding ok"); return sock; } @@ -535,14 +538,14 @@ telemetry_legacy_init(void) int rc; if (num_legacy_callbacks == 1) { - TMTY_LOG(WARNING, "No legacy callbacks, legacy socket not created\n"); + TMTY_LOG_LINE(WARNING, "No legacy callbacks, legacy socket not created"); return -1; } v1_socket.fn = legacy_client_handler; if ((size_t) snprintf(v1_socket.path, sizeof(v1_socket.path), "%s/telemetry", socket_dir) >= sizeof(v1_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v1_socket.sock = create_socket(v1_socket.path); @@ -552,7 +555,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, 
NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create legacy socket thread: %s", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; @@ -562,7 +565,7 @@ telemetry_legacy_init(void) } pthread_setaffinity_np(t_old, sizeof(*thread_cpuset), thread_cpuset); set_thread_name(t_old, "dpdk-telemet-v1"); - TMTY_LOG(DEBUG, "Legacy telemetry socket initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Legacy telemetry socket initialized ok"); pthread_detach(t_old); return 0; } @@ -584,7 +587,7 @@ telemetry_v2_init(void) "Returns help text for a command. Parameters: string command"); v2_socket.fn = client_handler; if (strlcpy(spath, get_socket_path(socket_dir, 2), sizeof(spath)) >= sizeof(spath)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } memcpy(v2_socket.path, spath, sizeof(v2_socket.path)); @@ -599,14 +602,14 @@ telemetry_v2_init(void) /* add a suffix to the path if the basic version fails */ if (snprintf(v2_socket.path, sizeof(v2_socket.path), "%s:%d", spath, ++suffix) >= (int)sizeof(v2_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v2_socket.sock = create_socket(v2_socket.path); } rc = pthread_create(&t_new, NULL, socket_listener, &v2_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create socket thread: %s", strerror(rc)); close(v2_socket.sock); v2_socket.sock = -1; @@ -634,7 +637,7 @@ rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_ #ifndef RTE_EXEC_ENV_WINDOWS if (telemetry_v2_init() != 0) return -1; - TMTY_LOG(DEBUG, "Telemetry initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Telemetry initialized ok"); telemetry_legacy_init(); #endif /* RTE_EXEC_ENV_WINDOWS */ diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 10ab77262e..f2c275a7d7 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -150,16 +150,16 @@ vhost_user_iotlb_pending_insert(struct virtio_net *dev, uint64_t iova, uint8_t p node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for pending insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for pending insertion"); if (!TAILQ_EMPTY(&dev->iotlb_pending_list)) vhost_user_iotlb_pending_remove_all(dev); else vhost_user_iotlb_cache_random_evict(dev); node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, pending insertion failure\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, pending insertion failure"); return; } } @@ -253,16 +253,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for cache insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for cache insertion"); if (!TAILQ_EMPTY(&dev->iotlb_list)) vhost_user_iotlb_cache_random_evict(dev); else vhost_user_iotlb_pending_remove_all(dev); new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, cache insertion failed\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, cache insertion failed"); return; } } @@ -415,7 +415,7 @@ vhost_user_iotlb_init(struct virtio_net *dev) dev->iotlb_pool = rte_calloc_socket("iotlb", IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0, socket); if (!dev->iotlb_pool) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to create IOTLB cache pool\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to create IOTLB cache pool"); return -1; } for (i = 0; i < IOTLB_CACHE_SIZE; i++) diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index 5882e44176..a2fdac30a4 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -128,17 +128,17 @@ read_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int m ret = recvmsg(sockfd, &msgh, 0); if (ret <= 0) { if (ret) - VHOST_LOG_CONFIG(ifname, ERR, "recvmsg failed on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "recvmsg failed on fd %d (%s)", sockfd, strerror(errno)); return ret; } if (msgh.msg_flags & MSG_TRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated msg (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated msg (fd %d)", sockfd); /* MSG_CTRUNC may be caused by LSM misconfiguration */ if (msgh.msg_flags & MSG_CTRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated control data (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated control data (fd %d)", sockfd); for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL; cmsg = CMSG_NXTHDR(&msgh, cmsg)) { @@ -181,7 +181,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f msgh.msg_controllen = sizeof(control); cmsg = CMSG_FIRSTHDR(&msgh); if (cmsg == NULL) { - VHOST_LOG_CONFIG(ifname, ERR, "cmsg == NULL\n"); + VHOST_CONFIG_LOG(ifname, ERR, "cmsg == NULL"); errno = EINVAL; return -1; } @@ -199,7 +199,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f } while (ret < 0 && errno == EINTR); if (ret < 0) { - VHOST_LOG_CONFIG(ifname, ERR, "sendmsg error on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "sendmsg error on fd %d (%s)", sockfd, strerror(errno)); return ret; } @@ -252,13 +252,13 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) dev->async_copy = 1; } - VHOST_LOG_CONFIG(vsocket->path, INFO, "new device, handle is %d\n", vid); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new device, handle is %d", vid); if (vsocket->notify_ops->new_connection) { ret = vsocket->notify_ops->new_connection(vid); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add vhost user connection with fd %d\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add vhost user connection with fd %d", fd); goto err_cleanup; } @@ -270,8 +270,8 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_read_cb, NULL, conn); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add fd %d into vhost server fdset\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add fd %d into vhost server fdset", fd); if (vsocket->notify_ops->destroy_connection) @@ -304,7 +304,7 @@ vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused) if (fd < 0) return; - VHOST_LOG_CONFIG(vsocket->path, INFO, "new vhost user connection is %d\n", fd); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new vhost user connection is %d", fd); vhost_user_add_connection(fd, vsocket); } @@ -352,12 +352,12 @@ create_unix_socket(struct vhost_user_socket *vsocket) fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) return -1; - 
VHOST_LOG_CONFIG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d\n", + VHOST_CONFIG_LOG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d", vsocket->is_server ? "server" : "client", fd); if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)", fd, strerror(errno)); close(fd); return -1; @@ -391,11 +391,11 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) */ ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un)); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to bind: %s; remove it and try again\n", + VHOST_CONFIG_LOG(path, ERR, "failed to bind: %s; remove it and try again", strerror(errno)); goto err; } - VHOST_LOG_CONFIG(path, INFO, "binding succeeded\n"); + VHOST_CONFIG_LOG(path, INFO, "binding succeeded"); ret = listen(fd, MAX_VIRTIO_BACKLOG); if (ret < 0) @@ -404,7 +404,7 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection, NULL, vsocket); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to add listen fd %d to vhost server fdset\n", + VHOST_CONFIG_LOG(path, ERR, "failed to add listen fd %d to vhost server fdset", fd); goto err; } @@ -444,12 +444,12 @@ vhost_user_connect_nonblock(char *path, int fd, struct sockaddr *un, size_t sz) flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - VHOST_LOG_CONFIG(path, ERR, "can't get flags for connfd %d (%s)\n", + VHOST_CONFIG_LOG(path, ERR, "can't get flags for connfd %d (%s)", fd, strerror(errno)); return -2; } if ((flags & O_NONBLOCK) && fcntl(fd, F_SETFL, flags & ~O_NONBLOCK)) { - VHOST_LOG_CONFIG(path, ERR, "can't disable nonblocking on fd %d\n", fd); + VHOST_CONFIG_LOG(path, ERR, "can't disable nonblocking on fd %d", fd); return -2; } return 0; @@ -477,15 +477,15 @@ vhost_user_client_reconnect(void *arg __rte_unused) sizeof(reconn->un)); if (ret == -2) { close(reconn->fd); - VHOST_LOG_CONFIG(reconn->vsocket->path, ERR, - "reconnection for fd %d failed\n", + VHOST_CONFIG_LOG(reconn->vsocket->path, ERR, + "reconnection for fd %d failed", reconn->fd); goto remove_fd; } if (ret == -1) continue; - VHOST_LOG_CONFIG(reconn->vsocket->path, INFO, "connected\n"); + VHOST_CONFIG_LOG(reconn->vsocket->path, INFO, "connected"); vhost_user_add_connection(reconn->fd, reconn->vsocket); remove_fd: TAILQ_REMOVE(&reconn_list.head, reconn, next); @@ -506,7 +506,7 @@ vhost_user_reconnect_init(void) ret = pthread_mutex_init(&reconn_list.mutex, NULL); if (ret < 0) { - VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__); + VHOST_CONFIG_LOG("thread", ERR, "%s: failed to initialize mutex", __func__); return ret; } TAILQ_INIT(&reconn_list.head); @@ -514,10 +514,10 @@ vhost_user_reconnect_init(void) ret = rte_thread_create_internal_control(&reconn_tid, "vhost-reco", vhost_user_client_reconnect, NULL); if (ret != 0) { - VHOST_LOG_CONFIG("thread", ERR, "failed to create reconnect thread\n"); + VHOST_CONFIG_LOG("thread", ERR, "failed to create reconnect thread"); if (pthread_mutex_destroy(&reconn_list.mutex)) - VHOST_LOG_CONFIG("thread", ERR, - "%s: failed to destroy reconnect mutex\n", + VHOST_CONFIG_LOG("thread", ERR, + "%s: failed to destroy reconnect mutex", __func__); } @@ -539,17 +539,17 @@ vhost_user_start_client(struct vhost_user_socket *vsocket) return 0; } - VHOST_LOG_CONFIG(path, WARNING, 
"failed to connect: %s\n", strerror(errno)); + VHOST_CONFIG_LOG(path, WARNING, "failed to connect: %s", strerror(errno)); if (ret == -2 || !vsocket->reconnect) { close(fd); return -1; } - VHOST_LOG_CONFIG(path, INFO, "reconnecting...\n"); + VHOST_CONFIG_LOG(path, INFO, "reconnecting..."); reconn = malloc(sizeof(*reconn)); if (reconn == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to allocate memory for reconnect\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to allocate memory for reconnect"); close(fd); return -1; } @@ -638,7 +638,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -731,7 +731,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -743,7 +743,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) } if (vdpa_dev->ops->get_features(vdpa_dev, &vdpa_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa features for socket file.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa features for socket file."); ret = -1; goto unlock_exit; } @@ -781,7 +781,7 @@ rte_vhost_driver_get_protocol_features(const char *path, pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -794,7 +794,7 @@ rte_vhost_driver_get_protocol_features(const char *path, if (vdpa_dev->ops->get_protocol_features(vdpa_dev, &vdpa_protocol_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa protocol features.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa protocol features."); ret = -1; goto unlock_exit; } @@ -818,7 +818,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -830,7 +830,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) } if (vdpa_dev->ops->get_queue_num(vdpa_dev, &vdpa_queue_num) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa queue number.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa queue number."); ret = -1; goto unlock_exit; } @@ -848,10 +848,10 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs) struct vhost_user_socket *vsocket; int ret = 0; - VHOST_LOG_CONFIG(path, INFO, "Setting max queue pairs to %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "Setting max queue pairs to %u", max_queue_pairs); if (max_queue_pairs > VHOST_MAX_QUEUE_PAIRS) { - VHOST_LOG_CONFIG(path, ERR, "Library only supports up to %u queue pairs\n", + VHOST_CONFIG_LOG(path, ERR, "Library only supports up to %u queue pairs", VHOST_MAX_QUEUE_PAIRS); return -1; } @@ -859,7 +859,7 @@ rte_vhost_driver_set_max_queue_num(const char 
*path, uint32_t max_queue_pairs) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -898,7 +898,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) pthread_mutex_lock(&vhost_user.mutex); if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) { - VHOST_LOG_CONFIG(path, ERR, "the number of vhost sockets reaches maximum\n"); + VHOST_CONFIG_LOG(path, ERR, "the number of vhost sockets reaches maximum"); goto out; } @@ -908,14 +908,14 @@ rte_vhost_driver_register(const char *path, uint64_t flags) memset(vsocket, 0, sizeof(struct vhost_user_socket)); vsocket->path = strdup(path); if (vsocket->path == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to copy socket path string\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to copy socket path string"); vhost_user_socket_mem_free(vsocket); goto out; } TAILQ_INIT(&vsocket->conn_list); ret = pthread_mutex_init(&vsocket->conn_mutex, NULL); if (ret) { - VHOST_LOG_CONFIG(path, ERR, "failed to init connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to init connection mutex"); goto out_free; } @@ -936,7 +936,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy && (vsocket->iommu_support || (flags & RTE_VHOST_USER_POSTCOPY_SUPPORT))) { - VHOST_LOG_CONFIG(path, ERR, "async copy with IOMMU or post-copy not supported\n"); + VHOST_CONFIG_LOG(path, ERR, "async copy with IOMMU or post-copy not supported"); goto out_mutex; } @@ -965,7 +965,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy) { vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL); vsocket->features &= ~(1ULL << VHOST_F_LOG_ALL); - VHOST_LOG_CONFIG(path, INFO, "logging feature is disabled in async copy mode\n"); + VHOST_CONFIG_LOG(path, INFO, "logging feature is disabled in async copy mode"); } /* @@ -979,8 +979,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags) (1ULL << VIRTIO_NET_F_HOST_TSO6) | (1ULL << VIRTIO_NET_F_HOST_UFO); - VHOST_LOG_CONFIG(path, INFO, "Linear buffers requested without external buffers,\n"); - VHOST_LOG_CONFIG(path, INFO, "disabling host segmentation offloading support\n"); + VHOST_CONFIG_LOG(path, INFO, "Linear buffers requested without external buffers,"); + VHOST_CONFIG_LOG(path, INFO, "disabling host segmentation offloading support"); vsocket->supported_features &= ~seg_offload_features; vsocket->features &= ~seg_offload_features; } @@ -995,7 +995,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) ~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT); } else { #ifndef RTE_LIBRTE_VHOST_POSTCOPY - VHOST_LOG_CONFIG(path, ERR, "Postcopy requested but not compiled\n"); + VHOST_CONFIG_LOG(path, ERR, "Postcopy requested but not compiled"); ret = -1; goto out_mutex; #endif @@ -1023,7 +1023,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) out_mutex: if (pthread_mutex_destroy(&vsocket->conn_mutex)) { - VHOST_LOG_CONFIG(path, ERR, "failed to destroy connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to destroy connection mutex"); } out_free: vhost_user_socket_mem_free(vsocket); @@ -1113,7 +1113,7 @@ rte_vhost_driver_unregister(const char *path) goto again; } - VHOST_LOG_CONFIG(path, INFO, "free connfd %d\n", conn->connfd); + VHOST_CONFIG_LOG(path, INFO, "free connfd %d", conn->connfd); close(conn->connfd); vhost_destroy_device(conn->vid); 
TAILQ_REMOVE(&vsocket->conn_list, conn, next); @@ -1192,14 +1192,14 @@ rte_vhost_driver_start(const char *path) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vhost_user.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vhost fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vhost fdset"); return -1; } int ret = rte_thread_create_internal_control(&fdset_tid, "vhost-evt", fdset_event_dispatch, &vhost_user.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create fdset handling thread"); fdset_pipe_uninit(&vhost_user.fdset); return -1; } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 219eef879c..9776fc07a9 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -84,8 +84,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, !ops->get_protocol_features || !ops->dev_conf || !ops->dev_close || !ops->set_vring_state || !ops->set_features) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Some mandatory vDPA ops aren't implemented\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Some mandatory vDPA ops aren't implemented"); return NULL; } @@ -107,8 +107,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, if (ops->get_dev_type) { ret = ops->get_dev_type(dev, &dev->type); if (ret) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Failed to get vdpa dev type.\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Failed to get vdpa dev type."); ret = -1; goto out_unlock; } diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c index 080b58f7de..c7ba5a61dd 100644 --- a/lib/vhost/vduse.c +++ b/lib/vhost/vduse.c @@ -78,32 +78,32 @@ vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unuse ret = ioctl(dev->vduse_dev_fd, VDUSE_IOTLB_GET_FD, &entry); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64, iova); return -1; } fd = ret; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "New IOTLB entry:\n"); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "New IOTLB entry:"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64, (uint64_t)entry.start, (uint64_t)entry.last); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\toffset: %" PRIx64 "\n", (uint64_t)entry.offset); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tfd: %d\n", fd); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tperm: %x\n", entry.perm); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\toffset: %" PRIx64, (uint64_t)entry.offset); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tfd: %d", fd); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tperm: %x", entry.perm); size = entry.last - entry.start + 1; mmap_addr = mmap(0, size + entry.offset, entry.perm, MAP_SHARED, fd, 0); if (!mmap_addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to mmap IOTLB entry for 0x%" PRIx64 "\n", iova); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to mmap IOTLB entry for 0x%" PRIx64, iova); ret = -1; goto close_fd; } ret = fstat(fd, &stat); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get page size.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get page size."); munmap(mmap_addr, entry.offset + size); goto close_fd; } @@ -134,14 +134,14 @@ vduse_control_queue_event(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &buf, sizeof(buf)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read control queue 
event: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read control queue event: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue kicked\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue kicked"); if (virtio_net_ctrl_handle(dev)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to handle ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to handle ctrl request"); } static void @@ -156,21 +156,21 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq_info.index = index; ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_GET_INFO, &vq_info); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get VQ %u info: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get VQ %u info: %s", index, strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "VQ %u info:\n", index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnum: %u\n", vq_info.num); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", (unsigned long long)vq_info.desc_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdriver_addr: %llx", (unsigned long long)vq_info.driver_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", (unsigned long long)vq_info.device_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tavail_idx: %u\n", vq_info.split.avail_index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tready: %u\n", vq_info.ready); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq_info.split.avail_index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); vq->last_avail_idx = vq_info.split.avail_index; vq->size = vq_info.num; @@ -182,12 +182,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (vq->kickfd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s", index, strerror(errno)); vq->kickfd = VIRTIO_INVALID_EVENTFD; return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tkick fd: %d\n", vq->kickfd); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tkick fd: %d", vq->kickfd); vq->shadow_used_split = rte_malloc_socket(NULL, vq->size * sizeof(struct vring_used_elem), @@ -198,12 +198,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vhost_user_iotlb_rd_lock(vq); if (vring_translate(dev, vq)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to translate vring %d addresses\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to translate vring %d addresses", index); if (vhost_enable_guest_notification(dev, vq, 0)) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to disable guest notifications on vring %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to disable guest notifications on vring %d", index); vhost_user_iotlb_rd_unlock(vq); @@ -212,7 +212,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD; @@ -222,8 +222,8 @@ 
vduse_vring_setup(struct virtio_net *dev, unsigned int index) if (vq == dev->cvq) { ret = fdset_add(&vduse.fdset, vq->kickfd, vduse_control_queue_event, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to setup kickfd handler for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to setup kickfd handler for VQ %u: %s", index, strerror(errno)); vq_efd.fd = VDUSE_EVENTFD_DEASSIGN; ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); @@ -232,7 +232,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) } fdset_pipe_notify(&vduse.fdset); vhost_enable_guest_notification(dev, vq, 1); - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl queue event handler installed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl queue event handler installed"); } } @@ -253,7 +253,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); @@ -279,23 +279,23 @@ vduse_device_start(struct virtio_net *dev) { unsigned int i, ret; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Starting device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Starting device..."); dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to get callback ops for driver"); return; } ret = ioctl(dev->vduse_dev_fd, VDUSE_DEV_GET_FEATURES, &dev->features); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get features: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get features: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64, dev->features); if (dev->features & @@ -331,7 +331,7 @@ vduse_device_stop(struct virtio_net *dev) { unsigned int i; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Stopping device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Stopping device..."); vhost_destroy_device_notify(dev); @@ -357,34 +357,34 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &req, sizeof(req)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read request: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read request: %s", strerror(errno)); return; } else if (ret < (int)sizeof(req)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Incomplete to read request %d\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Incomplete to read request %d", ret); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "New request: %s (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "New request: %s (%u)", vduse_req_id_to_str(req.type), req.type); switch (req.type) { case VDUSE_GET_VQ_STATE: vq = dev->virtqueue[req.vq_state.index]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u", req.vq_state.index, vq->last_avail_idx); resp.vq_state.split.avail_index = vq->last_avail_idx; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_SET_STATUS: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnew status: 0x%08x\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnew status: 0x%08x", req.s.status); old_status = dev->status; dev->status = 
req.s.status; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_UPDATE_IOTLB: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64, (uint64_t)req.iova.start, (uint64_t)req.iova.last); vhost_user_iotlb_cache_remove(dev, req.iova.start, req.iova.last - req.iova.start + 1); @@ -399,7 +399,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = write(dev->vduse_dev_fd, &resp, sizeof(resp)); if (ret != sizeof(resp)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to write response %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to write response %s", strerror(errno)); return; } @@ -411,7 +411,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) vduse_device_stop(dev); } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Request %s (%u) handled successfully\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Request %s (%u) handled successfully", vduse_req_id_to_str(req.type), req.type); } @@ -435,14 +435,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vduse.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vduse fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vduse fdset"); return -1; } ret = rte_thread_create_internal_control(&fdset_tid, "vduse-evt", fdset_event_dispatch, &vduse.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create vduse fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create vduse fdset handling thread"); fdset_pipe_uninit(&vduse.fdset); return -1; } @@ -452,13 +452,13 @@ vduse_device_create(const char *path, bool compliant_ol_flags) control_fd = open(VDUSE_CTRL_PATH, O_RDWR); if (control_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open %s: %s", VDUSE_CTRL_PATH, strerror(errno)); return -1; } if (ioctl(control_fd, VDUSE_SET_API_VERSION, &ver)) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set API version: %" PRIu64 ": %s", ver, strerror(errno)); ret = -1; goto out_ctrl_close; @@ -467,24 +467,24 @@ vduse_device_create(const char *path, bool compliant_ol_flags) dev_config = malloc(offsetof(struct vduse_dev_config, config) + sizeof(vnet_config)); if (!dev_config) { - VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to allocate VDUSE config"); ret = -1; goto out_ctrl_close; } ret = rte_vhost_driver_get_features(path, &features); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get backend features\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get backend features"); goto out_free; } ret = rte_vhost_driver_get_queue_num(path, &max_queue_pairs); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get max queue pairs\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get max queue pairs"); goto out_free; } - VHOST_LOG_CONFIG(path, INFO, "VDUSE max queue pairs: %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "VDUSE max queue pairs: %u", max_queue_pairs); total_queues = max_queue_pairs * 2; if (max_queue_pairs == 1) @@ -506,14 +506,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to create VDUSE device: 
%s", strerror(errno)); goto out_free; } dev_fd = open(path, O_RDWR); if (dev_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open device %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open device %s: %s", path, strerror(errno)); ret = -1; goto out_dev_close; @@ -521,14 +521,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fcntl(dev_fd, F_SETFL, O_NONBLOCK); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set chardev as non-blocking: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set chardev as non-blocking: %s", strerror(errno)); goto out_dev_close; } vid = vhost_new_device(&vduse_backend_ops); if (vid < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create new Vhost device\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to create new Vhost device"); ret = -1; goto out_dev_close; } @@ -549,7 +549,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = alloc_vring_queue(dev, i); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to alloc vring %d metadata\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to alloc vring %d metadata", i); goto out_dev_destroy; } @@ -558,7 +558,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP, &vq_cfg); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set-up VQ %d\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to set-up VQ %d", i); goto out_dev_destroy; } } @@ -567,7 +567,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fdset_add(&vduse.fdset, dev->vduse_dev_fd, vduse_events_handler, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to add fd %d to vduse fdset\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to add fd %d to vduse fdset", dev->vduse_dev_fd); goto out_dev_destroy; } @@ -624,7 +624,7 @@ vduse_device_destroy(const char *path) if (dev->vduse_ctrl_fd >= 0) { ret = ioctl(dev->vduse_ctrl_fd, VDUSE_DESTROY_DEV, name); if (ret) - VHOST_LOG_CONFIG(name, ERR, "Failed to destroy VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to destroy VDUSE device: %s", strerror(errno)); close(dev->vduse_ctrl_fd); dev->vduse_ctrl_fd = -1; diff --git a/lib/vhost/vduse.h b/lib/vhost/vduse.h index 4879b1f900..0d8f3f1205 100644 --- a/lib/vhost/vduse.h +++ b/lib/vhost/vduse.h @@ -21,14 +21,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) { RTE_SET_USED(compliant_ol_flags); - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } static inline int vduse_device_destroy(const char *path) { - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8a1f992d9d..5912a42979 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -100,8 +100,8 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, vhost_user_iotlb_pending_insert(dev, iova, perm); if (vhost_iotlb_miss(dev, iova, perm)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "IOTLB miss req failed for IOVA 0x%" PRIx64 "\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "IOTLB miss req failed for IOVA 0x%" PRIx64, iova); vhost_user_iotlb_pending_remove(dev, iova, 1, perm); } @@ -174,8 +174,8 @@ __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed 
to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -292,8 +292,8 @@ __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -473,9 +473,9 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, gpa = hva_to_gpa(dev, hva, exp_size); if (!gpa) { - VHOST_LOG_DATA(dev->ifname, ERR, + VHOST_DATA_LOG(dev->ifname, ERR, "failed to find GPA for log_addr: 0x%" - PRIx64 " hva: 0x%" PRIx64 "\n", + PRIx64 " hva: 0x%" PRIx64, log_addr, hva); return 0; } @@ -609,7 +609,7 @@ init_vring_queue(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq #ifdef RTE_LIBRTE_VHOST_NUMA if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); numa_node = SOCKET_ID_ANY; } @@ -640,8 +640,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0); if (vq == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vring %u.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vring %u.", i); return -1; } @@ -678,8 +678,8 @@ reset_device(struct virtio_net *dev) struct vhost_virtqueue *vq = dev->virtqueue[i]; if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to reset vring, virtqueue not allocated (%d)\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to reset vring, virtqueue not allocated (%d)", i); continue; } reset_vring_queue(dev, vq); @@ -697,17 +697,17 @@ vhost_new_device(struct vhost_backend_ops *ops) int i; if (ops == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing backend ops.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing backend ops."); return -1; } if (ops->iotlb_miss == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IOTLB miss backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IOTLB miss backend op."); return -1; } if (ops->inject_irq == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IRQ injection backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IRQ injection backend op."); return -1; } @@ -718,14 +718,14 @@ vhost_new_device(struct vhost_backend_ops *ops) } if (i == RTE_MAX_VHOST_DEVICE) { - VHOST_LOG_CONFIG("device", ERR, "failed to find a free slot for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to find a free slot for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } dev = rte_zmalloc(NULL, sizeof(struct virtio_net), 0); if (dev == NULL) { - VHOST_LOG_CONFIG("device", ERR, "failed to allocate memory for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to allocate memory for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } @@ -832,7 +832,7 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats dev->flags &= ~VIRTIO_DEV_SUPPORT_IOMMU; if (vhost_user_iotlb_init(dev) < 0) - VHOST_LOG_CONFIG("device", ERR, "failed to init IOTLB\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to init IOTLB"); } @@ 
-891,7 +891,7 @@ rte_vhost_get_numa_node(int vid) ret = get_mempolicy(&numa_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); return -1; } @@ -1608,8 +1608,8 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid) return 0; if (unlikely(qid >= dev->nr_vring || (qid & 1) == 0)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, qid); return 0; } @@ -1775,16 +1775,16 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) int node = vq->numa_node; if (unlikely(vq->async)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "async register failed: already registered (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "async register failed: already registered (qid: %d)", vq->index); return -1; } async = rte_zmalloc_socket(NULL, sizeof(struct vhost_async), 0, node); if (!async) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async metadata (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async metadata (qid: %d)", vq->index); return -1; } @@ -1792,8 +1792,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_info = rte_malloc_socket(NULL, vq->size * sizeof(struct async_inflight_info), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async_pkts_info (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async_pkts_info (qid: %d)", vq->index); goto out_free_async; } @@ -1801,8 +1801,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_cmpl_flag) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async pkts_cmpl_flag (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async pkts_cmpl_flag (qid: %d)", vq->index); goto out_free_async; } @@ -1812,8 +1812,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, node); if (!async->buffers_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async buffers (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async buffers (qid: %d)", vq->index); goto out_free_inflight; } @@ -1822,8 +1822,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem), RTE_CACHE_LINE_SIZE, node); if (!async->descs_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async descs (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async descs (qid: %d)", vq->index); goto out_free_inflight; } @@ -1914,8 +1914,8 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to unregister async channel, virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to unregister async channel, virtqueue busy."); return ret; } @@ -1927,9 +1927,9 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) if (!vq->async) { ret = 0; } else if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister 
async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); } else { vhost_free_async_mem(vq); ret = 0; @@ -1964,9 +1964,9 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) return 0; if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); return -1; } @@ -1985,17 +1985,17 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2005,8 +2005,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * info.max_vchans, RTE_CACHE_LINE_SIZE); if (vchans == NULL) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate vchans for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate vchans for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2015,7 +2015,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) } if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n", + VHOST_CONFIG_LOG("dma", INFO, "DMA %d vChannel %u already registered.", dma_id, vchan_id); pthread_mutex_unlock(&vhost_dma_lock); return 0; @@ -2027,8 +2027,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pkts_cmpl_flag_addr = rte_zmalloc(NULL, sizeof(bool *) * max_desc, RTE_CACHE_LINE_SIZE); if (!pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.", dma_id, vchan_id); if (dma_copy_track[dma_id].nr_vchans == 0) { @@ -2070,8 +2070,8 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to check in-flight packets. virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to check in-flight packets. 
virtqueue busy."); return ret; } @@ -2284,30 +2284,30 @@ rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid channel %d:%u.", dma_id, vchan_id); goto error; } if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, + "Failed to get stats for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } if (stats.submitted - stats.completed != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Do not unconfigure when there are inflight packets.\n"); + VHOST_CONFIG_LOG("dma", ERR, + "Do not unconfigure when there are inflight packets."); goto error; } diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5f24911190..5a74d0e628 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -673,17 +673,17 @@ vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, } extern int vhost_config_log_level; +#define RTE_LOGTYPE_VHOST_CONFIG vhost_config_log_level extern int vhost_data_log_level; +#define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level -#define VHOST_LOG_CONFIG(prefix, level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, vhost_config_log_level, \ - "VHOST_CONFIG: (%s) " fmt, prefix, ##args) +#define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ + RTE_LOG(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) -#define VHOST_LOG_DATA(prefix, level, fmt, args...) \ - (void)((RTE_LOG_ ## level <= RTE_LOG_DP_LEVEL) ? \ - rte_log(RTE_LOG_ ## level, vhost_data_log_level, \ - "VHOST_DATA: (%s) " fmt, prefix, ##args) : \ - 0) +#define VHOST_DATA_LOG(prefix, level, fmt, args...) 
\ + RTE_LOG_DP(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 @@ -702,7 +702,7 @@ extern int vhost_data_log_level; } \ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \ \ - VHOST_LOG_DATA(device->ifname, DEBUG, "%s", packet); \ + RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ } while (0) #else #define PRINT_PACKET(device, addr, size, header) do {} while (0) @@ -830,7 +830,7 @@ get_device(int vid) dev = vhost_devices[vid]; if (unlikely(!dev)) { - VHOST_LOG_CONFIG("device", ERR, "(%d) device not found.\n", vid); + VHOST_CONFIG_LOG("device", ERR, "(%d) device not found.", vid); } return dev; @@ -963,8 +963,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->signalled_used = new; vq->signalled_used_valid = true; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: used_event_idx=%d, old=%d, new=%d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: used_event_idx=%d, old=%d, new=%d", __func__, vhost_used_event(vq), old, new); if (vhost_need_event(vhost_used_event(vq), new, old) || diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 413f068bcd..bac10e6182 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -93,8 +93,8 @@ validate_msg_fds(struct virtio_net *dev, struct vhu_msg_context *ctx, int expect if (ctx->fd_num == expected_fds) return 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, - "expect %d FDs for request %s, received %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "expect %d FDs for request %s, received %d", expected_fds, vhost_message_handlers[ctx->msg.request.frontend].description, ctx->fd_num); @@ -144,7 +144,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) return; /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */ - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine map failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine map failed"); } } @@ -160,7 +160,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) if (rte_errno == EINVAL) return; - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine unmap failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine unmap failed"); } } } @@ -339,7 +339,7 @@ vhost_user_set_features(struct virtio_net **pdev, rte_vhost_driver_get_features(dev->ifname, &vhost_features); if (features & ~vhost_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid negotiated features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid negotiated features."); dev->flags |= VIRTIO_DEV_FEATURES_FAILED; dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; @@ -356,8 +356,8 @@ vhost_user_set_features(struct virtio_net **pdev, * is enabled when the live-migration starts. 
*/ if ((dev->features ^ features) & ~(1ULL << VHOST_F_LOG_ALL)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "features changed while device is running.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "features changed while device is running."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -374,11 +374,11 @@ vhost_user_set_features(struct virtio_net **pdev, } else { dev->vhost_hlen = sizeof(struct virtio_net_hdr); } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Virtio features: 0x%" PRIx64, dev->features); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "mergeable RX buffers %s, virtio 1 %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "mergeable RX buffers %s, virtio 1 %s", (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" : "off", (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" : "off"); @@ -426,8 +426,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, struct vhost_virtqueue *vq = dev->virtqueue[ctx->msg.payload.state.index]; if (ctx->msg.payload.state.num > 32768) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", ctx->msg.payload.state.num); return RTE_VHOST_MSG_RESULT_ERR; } @@ -445,8 +445,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, */ if (!vq_is_packed(dev)) { if (vq->size & (vq->size - 1)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", vq->size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -459,8 +459,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for shadow used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for shadow used ring."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -472,8 +472,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vq internal data.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vq internal data."); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -483,8 +483,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, vq->size * sizeof(struct batch_copy_elem), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->batch_copy_elems) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for batching copy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for batching copy."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -520,8 +520,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&node, NULL, 0, vq->desc, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "unable to get virtqueue %d numa information.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "unable to get virtqueue %d numa information.", vq->index); return; } @@ -531,15 +531,15 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq = rte_realloc_socket(*pvq, sizeof(**pvq), 0, node); if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc virtqueue %d on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc virtqueue %d on node %d", (*pvq)->index, node); return; } *pvq = vq; if (vq != dev->virtqueue[vq->index]) { - VHOST_LOG_CONFIG(dev->ifname, 
INFO, "reallocated virtqueue on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node); dev->virtqueue[vq->index] = vq; } @@ -549,8 +549,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sup = rte_realloc_socket(vq->shadow_used_packed, vq->size * sizeof(*sup), RTE_CACHE_LINE_SIZE, node); if (!sup) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow packed on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow packed on node %d", node); return; } @@ -561,8 +561,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sus = rte_realloc_socket(vq->shadow_used_split, vq->size * sizeof(*sus), RTE_CACHE_LINE_SIZE, node); if (!sus) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow split on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow split on node %d", node); return; } @@ -572,8 +572,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) bce = rte_realloc_socket(vq->batch_copy_elems, vq->size * sizeof(*bce), RTE_CACHE_LINE_SIZE, node); if (!bce) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc batch copy elem on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc batch copy elem on node %d", node); return; } @@ -584,8 +584,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) lc = rte_realloc_socket(vq->log_cache, sizeof(*lc) * VHOST_LOG_CACHE_NR, 0, node); if (!lc) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc log cache on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc log cache on node %d", node); return; } @@ -597,8 +597,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ri = rte_realloc_socket(vq->resubmit_inflight, sizeof(*ri), 0, node); if (!ri) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit inflight on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit inflight on node %d", node); return; } @@ -610,8 +610,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) rd = rte_realloc_socket(ri->resubmit_list, sizeof(*rd) * ri->resubmit_num, 0, node); if (!rd) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit list on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit list on node %d", node); return; } @@ -628,7 +628,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&dev_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "unable to get numa information.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "unable to get numa information."); return; } @@ -637,20 +637,20 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) dev = rte_realloc_socket(*pdev, sizeof(**pdev), 0, node); if (!dev) { - VHOST_LOG_CONFIG((*pdev)->ifname, ERR, "failed to realloc dev on node %d\n", node); + VHOST_CONFIG_LOG((*pdev)->ifname, ERR, "failed to realloc dev on node %d", node); return; } *pdev = dev; - VHOST_LOG_CONFIG(dev->ifname, INFO, "reallocated device on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated device on node %d", node); vhost_devices[dev->vid] = dev; mem_size = sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * dev->mem->nregions; mem = rte_realloc_socket(dev->mem, mem_size, 0, node); if (!mem) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc mem 
table on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc mem table on node %d", node); return; } @@ -659,8 +659,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) gp = rte_realloc_socket(dev->guest_pages, dev->max_guest_pages * sizeof(*gp), RTE_CACHE_LINE_SIZE, node); if (!gp) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc guest pages on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc guest pages on node %d", node); return; } @@ -771,8 +771,8 @@ mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64 size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - VHOST_LOG_CONFIG(dev->ifname, INFO, - "could not set coredump preference (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "could not set coredump preference (%s).", strerror(errno)); } #endif } @@ -791,7 +791,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->log_guest_addr = log_addr_to_gpa(dev, vq); if (vq->log_guest_addr == 0) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map log_guest_addr.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map log_guest_addr."); return; } } @@ -803,7 +803,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) if (vq->desc_packed == NULL || len != sizeof(struct vring_packed_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc_packed ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc_packed ring."); return; } @@ -819,8 +819,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.avail_user_addr, &len); if (vq->driver_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find driver area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find driver area address."); return; } @@ -832,8 +832,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.used_user_addr, &len); if (vq->device_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find device area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find device area address."); return; } @@ -851,7 +851,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->desc = (struct vring_desc *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.desc_user_addr, &len); if (vq->desc == 0 || len != sizeof(struct vring_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc ring."); return; } @@ -867,7 +867,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->avail = (struct vring_avail *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.avail_user_addr, &len); if (vq->avail == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map avail ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map avail ring."); return; } @@ -880,28 +880,28 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->used = (struct vring_used *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.used_user_addr, &len); if (vq->used == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to 
map used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map used ring."); return; } mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "last_used_idx (%u) and vq->used->idx (%u) mismatches;\n", + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "last_used_idx (%u) and vq->used->idx (%u) mismatches;", vq->last_used_idx, vq->used->idx); vq->last_used_idx = vq->used->idx; vq->last_avail_idx = vq->used->idx; - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "some packets maybe resent for Tx and dropped for Rx\n"); + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "some packets maybe resent for Tx and dropped for Rx"); } vq->access_ok = true; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address avail: %p\n", vq->avail); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address used: %p\n", vq->used); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64 "\n", vq->log_guest_addr); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address desc: %p", vq->desc); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address avail: %p", vq->avail); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address used: %p", vq->used); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64, vq->log_guest_addr); } /* @@ -975,8 +975,8 @@ vhost_user_set_vring_base(struct virtio_net **pdev, vq->last_avail_idx = ctx->msg.payload.state.num; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%u last_used_idx:%u last_avail_idx:%u.\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%u last_used_idx:%u last_avail_idx:%u.", ctx->msg.payload.state.index, vq->last_used_idx, vq->last_avail_idx); return RTE_VHOST_MSG_RESULT_OK; @@ -996,7 +996,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, dev->max_guest_pages * sizeof(*page), RTE_CACHE_LINE_SIZE); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "cannot realloc guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "cannot realloc guest_pages"); rte_free(old_pages); return -1; } @@ -1077,12 +1077,12 @@ dump_guest_pages(struct virtio_net *dev) for (i = 0; i < dev->nr_guest_pages; i++) { page = &dev->guest_pages[i]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "guest physical page region %u\n", i); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "guest physical page region %u", i); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64, page->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\thost_iova : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\thost_iova : %" PRIx64, page->host_iova); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tsize : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tsize : %" PRIx64, page->size); } } @@ -1131,9 +1131,9 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, ®_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to register ufd for region " - "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s\n", + "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s", (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1, @@ -1142,8 +1142,8 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, return -1; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t userfaultfd registered for range : 
%" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t userfaultfd registered for range : %" PRIx64 " - %" PRIx64, (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1); @@ -1190,8 +1190,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, * we've got to wait before we're allowed to generate faults. */ if (read_vhost_message(dev, main_fd, &ack_ctx) <= 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to read qemu ack on postcopy set-mem-table\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to read qemu ack on postcopy set-mem-table"); return -1; } @@ -1199,8 +1199,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, return -1; if (ack_ctx.msg.request.frontend != VHOST_USER_SET_MEM_TABLE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "bad qemu ack on postcopy set-mem-table (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "bad qemu ack on postcopy set-mem-table (%d)", ack_ctx.msg.request.frontend); return -1; } @@ -1227,8 +1227,8 @@ vhost_user_mmap_region(struct virtio_net *dev, /* Check for memory_size + mmap_offset overflow */ if (mmap_offset >= -region->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow", mmap_offset, region->size); return -1; } @@ -1243,7 +1243,7 @@ vhost_user_mmap_region(struct virtio_net *dev, */ alignment = get_blk_size(region->fd); if (alignment == (uint64_t)-1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "couldn't get hugepage size through fstat\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "couldn't get hugepage size through fstat"); return -1; } mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment); @@ -1256,8 +1256,8 @@ vhost_user_mmap_region(struct virtio_net *dev, * mmap() kernel implementation would return an error, but * better catch it before and provide useful info in the logs. 
*/ - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid", region->size + mmap_offset, alignment); return -1; } @@ -1267,7 +1267,7 @@ vhost_user_mmap_region(struct virtio_net *dev, MAP_SHARED | populate, region->fd, 0); if (mmap_addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap failed (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap failed (%s).", strerror(errno)); return -1; } @@ -1278,35 +1278,35 @@ vhost_user_mmap_region(struct virtio_net *dev, if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "adding guest pages to region failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "adding guest pages to region failed."); return -1; } } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "guest memory region size: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "guest memory region size: 0x%" PRIx64, region->size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest physical addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest physical addr: 0x%" PRIx64, region->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest virtual addr: 0x%" PRIx64, region->guest_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t host virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t host virtual addr: 0x%" PRIx64, region->host_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap addr : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap addr : 0x%" PRIx64, (uint64_t)(uintptr_t)mmap_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap size : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap size : 0x%" PRIx64, mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap align: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap align: 0x%" PRIx64, alignment); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap off : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap off : 0x%" PRIx64, mmap_offset); return 0; @@ -1329,14 +1329,14 @@ vhost_user_set_mem_table(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (memory->nregions > VHOST_MEMORY_MAX_NREGIONS) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "too many memory regions (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "too many memory regions (%u)", memory->nregions); goto close_msg_fds; } if (dev->mem && !vhost_memory_changed(memory, dev->mem)) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "memory regions not changed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "memory regions not changed"); close_msg_fds(ctx); @@ -1386,8 +1386,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, numa_node); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for dev->guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for dev->guest_pages"); goto close_msg_fds; } } @@ -1395,7 +1395,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node); if (dev->mem == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to allocate memory for dev->mem\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem"); goto free_guest_pages; } @@ -1416,7 +1416,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, mmap_offset = memory->regions[i].mmap_offset; if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap region %u\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region %u", i); goto free_mem_table; } @@ -1538,7 +1538,7 @@ virtio_is_ready(struct virtio_net *dev) dev->flags |= VIRTIO_DEV_READY; if (!(dev->flags & VIRTIO_DEV_RUNNING)) - VHOST_LOG_CONFIG(dev->ifname, INFO, "virtio is now ready for processing.\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "virtio is now ready for processing."); return 1; } @@ -1559,7 +1559,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f if (mfd == -1) { mfd = mkstemp(fname); if (mfd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to get inflight buffer fd\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to get inflight buffer fd"); return NULL; } @@ -1567,14 +1567,14 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } if (ftruncate(mfd, size) == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc inflight buffer"); close(mfd); return NULL; } ptr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0); if (ptr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap inflight buffer"); close(mfd); return NULL; } @@ -1616,8 +1616,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, void *addr; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid get_inflight_fd message size is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid get_inflight_fd message size is %d", ctx->msg.size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1633,7 +1633,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (!dev->inflight_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1642,11 +1642,11 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, num_queues = ctx->msg.payload.inflight.num_queues; queue_size = ctx->msg.payload.inflight.queue_size; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd num_queues: %u", ctx->msg.payload.inflight.num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd queue_size: %u", ctx->msg.payload.inflight.queue_size); if (vq_is_packed(dev)) @@ -1657,7 +1657,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, mmap_size = num_queues * pervq_inflight_size; addr = inflight_mem_alloc(dev, "vhost-inflight", mmap_size, &fd); if (!addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc vhost inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc vhost inflight area"); ctx->msg.payload.inflight.mmap_size = 0; return RTE_VHOST_MSG_RESULT_ERR; } @@ -1691,14 +1691,14 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, } } - 
VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_size: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_size: %"PRIu64, ctx->msg.payload.inflight.mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_offset: %"PRIu64, ctx->msg.payload.inflight.mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight fd: %d\n", ctx->fds[0]); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight fd: %d", ctx->fds[0]); return RTE_VHOST_MSG_RESULT_REPLY; } @@ -1722,8 +1722,8 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, fd = ctx->fds[0]; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight) || fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid set_inflight_fd message size is %d,fd is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid set_inflight_fd message size is %d,fd is %d", ctx->msg.size, fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1738,21 +1738,21 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, else pervq_inflight_size = get_pervq_shm_size_split(queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64"\n", mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64, mmap_size); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd mmap_offset: %"PRIu64, mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd num_queues: %u", num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd queue_size: %u", queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd fd: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd fd: %d", fd); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd pervq_inflight_size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd pervq_inflight_size: %d", pervq_inflight_size); /* @@ -1766,7 +1766,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (dev->inflight_info == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1780,7 +1780,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, addr = mmap(0, mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, mmap_offset); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap share memory.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap share memory."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1831,8 +1831,8 @@ vhost_user_set_vring_call(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring call idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring call idx:%d file:%d", file.index, file.fd); vq = dev->virtqueue[file.index]; @@ -1863,7 +1863,7 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev, if (!(ctx->msg.payload.u64 & VHOST_USER_VRING_NOFD_MASK)) close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented"); 
return RTE_VHOST_MSG_RESULT_OK; } @@ -1929,8 +1929,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (!resubmit) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1938,8 +1938,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (!resubmit->resubmit_list) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for inflight desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for inflight desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2025,8 +2025,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (resubmit == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2034,8 +2034,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (resubmit->resubmit_list == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2082,8 +2082,8 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring kick idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring kick idx:%d file:%d", file.index, file.fd); /* Interpret ring addresses only when ring is started. 
*/ @@ -2111,15 +2111,15 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, if (vq_is_packed(dev)) { if (vhost_check_queue_inflights_packed(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } } else { if (vhost_check_queue_inflights_split(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2159,8 +2159,8 @@ vhost_user_get_vring_base(struct virtio_net **pdev, ctx->msg.payload.state.num = vq->last_avail_idx; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%d file:%d", ctx->msg.payload.state.index, ctx->msg.payload.state.num); /* * Based on current qemu vhost-user implementation, this message is @@ -2217,8 +2217,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, bool enable = !!ctx->msg.payload.state.num; int index = (int)ctx->msg.payload.state.index; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set queue enable: %d to qp idx: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set queue enable: %d to qp idx: %d", enable, index); vq = dev->virtqueue[index]; @@ -2226,8 +2226,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, /* vhost_user_lock_all_queue_pairs locked all qps */ vq_assert_lock(dev, vq); if (enable && vq->async && vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to enable vring. Inflight packets must be completed first\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to enable vring. Inflight packets must be completed first"); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -2267,13 +2267,13 @@ vhost_user_set_protocol_features(struct virtio_net **pdev, rte_vhost_driver_get_protocol_features(dev->ifname, &backend_protocol_features); if (protocol_features & ~backend_protocol_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid protocol features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid protocol features."); return RTE_VHOST_MSG_RESULT_ERR; } dev->protocol_features = protocol_features; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Vhost-user protocol features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Vhost-user protocol features: 0x%" PRIx64, dev->protocol_features); return RTE_VHOST_MSG_RESULT_OK; @@ -2295,13 +2295,13 @@ vhost_user_set_log_base(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid log fd: %d\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid log fd: %d", fd); return RTE_VHOST_MSG_RESULT_ERR; } if (ctx->msg.size != sizeof(VhostUserLog)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid log base msg size: %"PRId32" != %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid log base msg size: %"PRId32" != %d", ctx->msg.size, (int)sizeof(VhostUserLog)); goto close_msg_fds; } @@ -2311,14 +2311,14 @@ vhost_user_set_log_base(struct virtio_net **pdev, /* Check for mmap size and offset overflow. 
*/ if (off >= -size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "log offset %#"PRIx64" and log size %#"PRIx64" overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "log offset %#"PRIx64" and log size %#"PRIx64" overflow", off, size); goto close_msg_fds; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "log mmap size: %"PRId64", offset: %"PRId64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "log mmap size: %"PRId64", offset: %"PRId64, size, off); /* @@ -2329,7 +2329,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, alignment = get_blk_size(fd); close(fd); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap log base failed!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2359,8 +2359,8 @@ vhost_user_set_log_base(struct virtio_net **pdev, * caching will be done, which will impact performance */ if (!vq->log_cache) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate VQ logging cache\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate VQ logging cache"); } /* @@ -2387,7 +2387,7 @@ static int vhost_user_set_log_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented."); return RTE_VHOST_MSG_RESULT_OK; } @@ -2409,8 +2409,8 @@ vhost_user_send_rarp(struct virtio_net **pdev, uint8_t *mac = (uint8_t *)&ctx->msg.payload.u64; struct rte_vdpa_device *vdpa_dev; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "MAC: " RTE_ETHER_ADDR_PRT_FMT "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "MAC: " RTE_ETHER_ADDR_PRT_FMT, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]); memcpy(dev->mac.addr_bytes, mac, 6); @@ -2438,8 +2438,8 @@ vhost_user_net_set_mtu(struct virtio_net **pdev, if (ctx->msg.payload.u64 < VIRTIO_MIN_MTU || ctx->msg.payload.u64 > VIRTIO_MAX_MTU) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid MTU size (%"PRIu64")\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid MTU size (%"PRIu64")", ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; @@ -2462,8 +2462,8 @@ vhost_user_set_req_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid file descriptor for backend channel (%d)\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid file descriptor for backend channel (%d)", fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2563,7 +2563,7 @@ vhost_user_get_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2573,10 +2573,10 @@ vhost_user_get_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size); if (ret != 0) { ctx->msg.size = 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() return error!"); } } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() not supported!"); } return RTE_VHOST_MSG_RESULT_REPLY; @@ -2595,14 +2595,14 @@ vhost_user_set_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (ctx->msg.payload.cfg.size > VHOST_USER_MAX_CONFIG_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost_user_config size: %"PRIu32", should not be larger than %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost_user_config size: %"PRIu32", should not be larger 
than %d", ctx->msg.payload.cfg.size, VHOST_USER_MAX_CONFIG_SIZE); goto out; } if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); goto out; } @@ -2613,9 +2613,9 @@ vhost_user_set_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size, ctx->msg.payload.cfg.flags); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() return error!"); } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() not supported!"); } return RTE_VHOST_MSG_RESULT_OK; @@ -2676,7 +2676,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, } break; default: - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid IOTLB message type (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid IOTLB message type (%d)", imsg->type); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2696,16 +2696,16 @@ vhost_user_set_postcopy_advise(struct virtio_net **pdev, dev->postcopy_ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); if (dev->postcopy_ufd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "userfaultfd not available: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "userfaultfd not available: %s", strerror(errno)); return RTE_VHOST_MSG_RESULT_ERR; } api_struct.api = UFFD_API; api_struct.features = 0; if (ioctl(dev->postcopy_ufd, UFFDIO_API, &api_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "UFFDIO_API ioctl failure: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "UFFDIO_API ioctl failure: %s", strerror(errno)); close(dev->postcopy_ufd); dev->postcopy_ufd = -1; @@ -2731,8 +2731,8 @@ vhost_user_set_postcopy_listen(struct virtio_net **pdev, struct virtio_net *dev = *pdev; if (dev->mem && dev->mem->nregions) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "regions already registered at postcopy-listen\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "regions already registered at postcopy-listen"); return RTE_VHOST_MSG_RESULT_ERR; } dev->postcopy_listening = 1; @@ -2783,8 +2783,8 @@ vhost_user_set_status(struct virtio_net **pdev, /* As per Virtio specification, the device status is 8bits long */ if (ctx->msg.payload.u64 > UINT8_MAX) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64, ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2793,8 +2793,8 @@ vhost_user_set_status(struct virtio_net **pdev, if ((dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK) && (dev->flags & VIRTIO_DEV_FEATURES_FAILED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "FEATURES_OK bit is set but feature negotiation failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "FEATURES_OK bit is set but feature negotiation failed"); /* * Clear the bit to let the driver know about the feature * negotiation failure @@ -2802,27 +2802,27 @@ vhost_user_set_status(struct virtio_net **pdev, dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "new device status(0x%08x):\n", dev->status); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "new device status(0x%08x):", dev->status); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-RESET: %u", (dev->status == VIRTIO_DEVICE_STATUS_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-ACKNOWLEDGE: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-ACKNOWLEDGE: %u", !!(dev->status & 
VIRTIO_DEVICE_STATUS_ACK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FEATURES_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FEATURES_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DEVICE_NEED_RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DEVICE_NEED_RESET: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DEV_NEED_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FAILED: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FAILED: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FAILED)); return RTE_VHOST_MSG_RESULT_OK; @@ -2881,14 +2881,14 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * goto out; if (ret != VHOST_USER_HDR_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Unexpected header size read\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Unexpected header size read"); ret = -1; goto out; } if (ctx->msg.size) { if (ctx->msg.size > sizeof(ctx->msg.payload)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid msg size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid msg size: %d", ctx->msg.size); ret = -1; goto out; @@ -2897,7 +2897,7 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * if (ret <= 0) goto out; if (ret != (int)ctx->msg.size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "read control message failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "read control message failed"); ret = -1; goto out; } @@ -2949,24 +2949,24 @@ send_vhost_backend_message_process_reply(struct virtio_net *dev, struct vhu_msg_ rte_spinlock_lock(&dev->backend_req_lock); ret = send_vhost_backend_message(dev, ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); goto out; } ret = read_vhost_message(dev, dev->backend_req_fd, &msg_reply); if (ret <= 0) { if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost read backend message reply failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost read backend message reply failed"); else - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); ret = -1; goto out; } if (msg_reply.msg.request.backend != ctx->msg.request.backend) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "received unexpected msg type (%u), expected %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "received unexpected msg type (%u), expected %u", msg_reply.msg.request.backend, ctx->msg.request.backend); ret = -1; goto out; @@ -3010,7 +3010,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, } if (vring_idx >= VHOST_MAX_VRING) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid vring index: %u\n", vring_idx); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid vring index: %u", vring_idx); return -1; } @@ -3078,8 +3078,8 @@ vhost_user_msg_handler(int vid, int fd) if (!dev->notify_ops) { dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to get callback ops for driver"); return -1; } } @@ -3087,7 
+3087,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.msg.request.frontend = VHOST_USER_NONE; ret = read_vhost_message(dev, fd, &ctx); if (ret == 0) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); return -1; } @@ -3098,7 +3098,7 @@ vhost_user_msg_handler(int vid, int fd) msg_handler = NULL; if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost read message %s%s%sfailed\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost read message %s%s%sfailed", msg_handler != NULL ? "for " : "", msg_handler != NULL ? msg_handler->description : "", msg_handler != NULL ? " " : ""); @@ -3107,20 +3107,20 @@ vhost_user_msg_handler(int vid, int fd) if (msg_handler != NULL && msg_handler->description != NULL) { if (request != VHOST_USER_IOTLB_MSG) - VHOST_LOG_CONFIG(dev->ifname, INFO, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "read message %s", msg_handler->description); else - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "read message %s", msg_handler->description); } else { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "external request %d\n", request); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "external request %d", request); } ret = vhost_user_check_and_alloc_queue_pair(dev, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc queue"); return -1; } @@ -3187,20 +3187,20 @@ vhost_user_msg_handler(int vid, int fd) switch (msg_result) { case RTE_VHOST_MSG_RESULT_ERR: - VHOST_LOG_CONFIG(dev->ifname, ERR, - "processing %s failed.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "processing %s failed.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_OK: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_REPLY: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded and needs reply.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded and needs reply.", msg_handler->description); send_vhost_reply(dev, fd, &ctx); handled = true; @@ -3229,8 +3229,8 @@ vhost_user_msg_handler(int vid, int fd) /* If message was not handled at this stage, treat it as an error */ if (!handled) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost message (req: %d) was not handled.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost message (req: %d) was not handled.", request); close_msg_fds(&ctx); msg_result = RTE_VHOST_MSG_RESULT_ERR; @@ -3247,7 +3247,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.fd_num = 0; send_vhost_reply(dev, fd, &ctx); } else if (msg_result == RTE_VHOST_MSG_RESULT_ERR) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost message handling failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost message handling failed."); ret = -1; goto unlock; } @@ -3296,7 +3296,7 @@ vhost_user_msg_handler(int vid, int fd) if (!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) { if (vdpa_dev->ops->dev_conf(dev->vid)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to configure vDPA device\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to configure vDPA device"); else dev->flags |= VIRTIO_DEV_VDPA_CONFIGURED; } @@ -3324,8 +3324,8 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) ret = send_vhost_message(dev, dev->backend_req_fd, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, 
ERR, - "failed to send IOTLB miss message (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to send IOTLB miss message (%d)", ret); return ret; } @@ -3358,7 +3358,7 @@ rte_vhost_backend_config_change(int vid, bool need_reply) } if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); return ret; } @@ -3390,7 +3390,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev, ret = send_vhost_backend_message_process_reply(dev, &ctx); if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to set host notifier (%d)", ret); return ret; } diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 8af20f1487..280d4845f8 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -130,8 +130,8 @@ vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, */ if (unlikely(copy_idx < 0)) { if (!vhost_async_dma_copy_log) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA copy failed for channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA copy failed for channel %d:%u", dma_id, vchan_id); vhost_async_dma_copy_log = true; } @@ -201,8 +201,8 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t */ nr_copies = rte_dma_completed(dma_id, vchan_id, max_pkts, &last_idx, &has_error); if (unlikely(!vhost_async_dma_complete_log && has_error)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA completion failure on channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA completion failure on channel %d:%u", dma_id, vchan_id); vhost_async_dma_complete_log = true; } else if (nr_copies == 0) { @@ -1062,7 +1062,7 @@ async_iter_initialize(struct virtio_net *dev, struct vhost_async *async) struct vhost_iov_iter *iter; if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); return -1; } @@ -1084,7 +1084,7 @@ async_iter_add_iovec(struct virtio_net *dev, struct vhost_async *async, static bool vhost_max_async_vec_log; if (!vhost_max_async_vec_log) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); vhost_max_async_vec_log = true; } @@ -1145,8 +1145,8 @@ async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, host_iova = (void *)(uintptr_t)gpa_to_first_hpa(dev, buf_iova + buf_offset, cpy_len, &mapped_len); if (unlikely(!host_iova)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: failed to get host iova.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: failed to get host iova.", __func__); return -1; } @@ -1243,7 +1243,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, } else hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr; - VHOST_LOG_DATA(dev->ifname, DEBUG, "RX: num merge buffers %d\n", num_buffers); + VHOST_DATA_LOG(dev->ifname, DEBUG, "RX: num merge buffers %d", num_buffers); if (unlikely(buf_len < dev->vhost_hlen)) { buf_offset = dev->vhost_hlen - buf_len; @@ -1428,14 +1428,14 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, 
DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, @@ -1645,12 +1645,12 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, if (unlikely(vhost_enqueue_single_packed(dev, vq, pkt, buf_vec, &nr_descs) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + nr_descs); vq_inc_last_avail_packed(vq, nr_descs); @@ -1702,7 +1702,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); rte_rwlock_read_lock(&vq->access_lock); if (unlikely(!vq->enabled)) @@ -1744,15 +1744,15 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -1821,14 +1821,14 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, num_buffers, true) < 0) { @@ -1853,8 +1853,8 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(pkt_err)) { uint16_t num_descs = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); /* update number of completed packets */ @@ -1967,12 +1967,12 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(vhost_enqueue_async_packed(dev, vq, pkt, buf_vec, nr_descs, nr_buffers) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + *nr_descs); return 0; @@ -2151,8 +2151,8 @@ 
virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx); } @@ -2344,18 +2344,18 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, if (unlikely(!dev)) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2363,15 +2363,15 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); goto out; } @@ -2399,15 +2399,15 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2417,16 +2417,16 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, vq_assert_lock(dev, vq); if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2455,15 +2455,15 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %u.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, 
"%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2471,20 +2471,20 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %u.", __func__, queue_id); goto out_access_unlock; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); goto out_access_unlock; } @@ -2511,12 +2511,12 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2565,15 +2565,15 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -2743,8 +2743,8 @@ vhost_dequeue_offload_legacy(struct virtio_net *dev, struct virtio_net_hdr *hdr, m->l4_len = sizeof(struct rte_udp_hdr); break; default: - VHOST_LOG_DATA(dev->ifname, WARNING, - "unsupported gso type %u.\n", + VHOST_DATA_LOG(dev->ifname, WARNING, + "unsupported gso type %u.", hdr->gso_type); goto error; } @@ -2975,8 +2975,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (mbuf_avail == 0) { cur = rte_pktmbuf_alloc(mbuf_pool); if (unlikely(cur == NULL)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to allocate memory for mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to allocate memory for mbuf."); goto error; } @@ -3041,7 +3041,7 @@ virtio_dev_extbuf_alloc(struct virtio_net *dev, struct rte_mbuf *pkt, uint32_t s virtio_dev_extbuf_free, buf); if (unlikely(shinfo == NULL)) { rte_free(buf); - VHOST_LOG_DATA(dev->ifname, ERR, "failed to init shinfo\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to init shinfo"); return -1; } @@ -3097,11 +3097,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + 
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) return 0; @@ -3138,8 +3138,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. */ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3152,7 +3152,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } dropped += 1; @@ -3421,8 +3421,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3433,7 +3433,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } return -1; @@ -3556,15 +3556,15 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -3609,7 +3609,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } @@ -3731,7 +3731,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) goto out; @@ -3768,8 +3768,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. 
*/ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed mbuf alloc of size %d from %s\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed mbuf alloc of size %d from %s", __func__, buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3783,8 +3783,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, legacy_ol_flags, slot_idx, true); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed to offload copies to async channel.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed to offload copies to async channel.", __func__); allocerr_warned = true; } @@ -3814,7 +3814,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: failed to transfer data.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: failed to transfer data.", __func__); pkt_idx = n_xfer; @@ -3914,7 +3914,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; @@ -3927,7 +3927,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(err)) { rte_pktmbuf_free(pkts); if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed to copy desc to mbuf on.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "Failed to copy desc to mbuf on."); allocerr_warned = true; } return -1; @@ -4019,7 +4019,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct async_inflight_info *pkts_info = async->pkts_info; struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; - VHOST_LOG_DATA(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers\n", dev->vid, count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers", dev->vid, count); async_iter_reset(async); @@ -4153,26 +4153,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, *nr_inflight = -1; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -4188,7 +4188,7 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.", __func__, queue_id); count = 0; goto out_access_unlock; @@ -4224,7 +4224,7 @@ 
rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c index c4847f84ed..8f78122361 100644 --- a/lib/vhost/virtio_net_ctrl.c +++ b/lib/vhost/virtio_net_ctrl.c @@ -36,13 +36,13 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, avail_idx = rte_atomic_load_explicit((unsigned short __rte_atomic *)&cvq->avail->idx, rte_memory_order_acquire); if (avail_idx == cvq->last_avail_idx) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue empty"); return 0; } desc_idx = cvq->avail->ring[cvq->last_avail_idx]; if (desc_idx >= cvq->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Out of range desc index, dropping"); goto err; } @@ -55,7 +55,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto err; } @@ -72,28 +72,28 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, if (descs[desc_idx].flags & VRING_DESC_F_WRITE) { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } if (desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Invalid ack size for ctrl req, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Invalid ack size for ctrl req, dropping"); goto err; } ctrl_elem->desc_ack = (uint8_t *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_WO); if (!ctrl_elem->desc_ack || desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to map ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to map ctrl ack descriptor"); goto err; } } else { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } @@ -114,18 +114,18 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, ctrl_elem->n_descs = n_descs; if (!ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Missing ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Missing ctrl ack descriptor"); goto err; } if (data_len < sizeof(ctrl_elem->ctrl_req->class) + sizeof(ctrl_elem->ctrl_req->command)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Invalid control header size\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Invalid control header size"); goto err; } ctrl_elem->ctrl_req = malloc(data_len); if (!ctrl_elem->ctrl_req) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to alloc ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to alloc ctrl request"); goto err; } @@ -138,7 +138,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, 
&desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto free_err; } @@ -153,7 +153,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, desc_addr = vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!desc_addr || desc_len < descs[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl descriptor"); goto free_err; } @@ -199,7 +199,7 @@ virtio_net_ctrl_handle_req(struct virtio_net *dev, struct virtio_net_ctrl *ctrl_ uint32_t i; queue_pairs = *(uint16_t *)(uintptr_t)ctrl_req->command_data; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs\n", queue_pairs); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs", queue_pairs); ret = VIRTIO_NET_OK; for (i = 0; i < dev->nr_vring; i++) { @@ -253,12 +253,12 @@ virtio_net_ctrl_handle(struct virtio_net *dev) int ret = 0; if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Packed ring not supported yet\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Packed ring not supported yet"); return -1; } if (!dev->cvq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "missing control queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "missing control queue"); return -1; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
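The vhost conversion above follows directly from how the renamed per-line macros behave: VHOST_CONFIG_LOG and VHOST_DATA_LOG terminate the line themselves, so any \n left in a call site's format string would now print as an extra blank line. A minimal standalone illustration of that failure mode (plain printf and a made-up macro name, not the actual vhost helpers):

#include <stdio.h>

/* Stand-in for a component log macro that appends the newline itself. */
#define DEMO_LOG(fmt, ...) printf("CONFIG: " fmt "\n", ##__VA_ARGS__)

int main(void)
{
	DEMO_LOG("invalid vring index: %u", 3);    /* one clean line */
	DEMO_LOG("invalid vring index: %u\n", 3);  /* stray \n -> extra blank line */
	return 0;
}

Hence every "\n" dropped in the hunks above: once the macro owns the line ending, the call sites must not provide one.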
* [PATCH v3 14/14] lib: use per line logging in helpers 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand ` (12 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 13/14] lib: replace logging helpers David Marchand @ 2023-12-18 9:27 ` David Marchand 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 9:27 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng, Andrew Rybchenko, Nicolas Chautru, Konstantin Ananyev, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Yipeng Wang, Sameh Gobriel, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Ciara Power, Maxime Coquelin, Chenbo Xia Use RTE_LOG_LINE in existing macros that append a \n. Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v1: - converted all logging helpers in lib/, --- lib/bbdev/rte_bbdev.c | 5 +++-- lib/bpf/bpf_impl.h | 2 +- lib/cfgfile/rte_cfgfile.c | 4 ++-- lib/compressdev/rte_compressdev_internal.h | 5 +++-- lib/cryptodev/rte_cryptodev.h | 16 +++++++--------- lib/dmadev/rte_dmadev.c | 6 ++++-- lib/ethdev/rte_ethdev.h | 3 +-- lib/eventdev/eventdev_pmd.h | 8 ++++---- lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- lib/gpudev/gpudev.c | 6 ++++-- lib/graph/graph_private.h | 5 +++-- lib/member/member.h | 4 ++-- lib/metrics/rte_metrics_telemetry.c | 4 ++-- lib/mldev/rte_mldev.h | 5 +++-- lib/net/rte_net_crc.c | 8 ++++---- lib/node/node_private.h | 6 ++++-- lib/pdump/rte_pdump.c | 5 ++--- lib/power/power_common.h | 2 +- lib/rawdev/rte_rawdev_pmd.h | 4 ++-- lib/rcu/rte_rcu_qsbr.h | 8 +++----- lib/regexdev/rte_regexdev.h | 3 +-- lib/stack/stack_pvt.h | 4 ++-- lib/telemetry/telemetry.c | 4 +--- lib/vhost/vhost.h | 8 ++++---- lib/vhost/vhost_crypto.c | 6 +++--- 25 files changed, 76 insertions(+), 72 deletions(-) diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index e09bb97abb..13bde3c25b 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -28,10 +28,11 @@ /* BBDev library logging ID */ RTE_LOG_REGISTER_DEFAULT(bbdev_logtype, NOTICE); +#define RTE_LOGTYPE_BBDEV bbdev_logtype /* Helper macro for logging */ -#define rte_bbdev_log(level, fmt, ...) \ - rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__) +#define rte_bbdev_log(level, ...) \ + RTE_LOG_LINE(level, BBDEV, "" __VA_ARGS__) #define rte_bbdev_log_debug(fmt, ...) \ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \ diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index 6a82ae4ef2..1a3d97d0c7 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -30,7 +30,7 @@ extern int rte_bpf_logtype; #define RTE_LOGTYPE_BPF rte_bpf_logtype #define RTE_BPF_LOG_LINE(lvl, fmt, args...) 
\ - RTE_LOG(lvl, BPF, fmt "\n", ##args) + RTE_LOG_LINE(lvl, BPF, fmt, ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index 2f9cc0722a..6a5e4fd942 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -29,10 +29,10 @@ struct rte_cfgfile { /* Setting up dynamic logging 8< */ RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO); +#define RTE_LOGTYPE_CFGFILE cfgfile_logtype #define CFG_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, cfgfile_logtype, "%s(): " fmt "\n", \ - __func__, ## args) + RTE_LOG_LINE(level, CFGFILE, "%s(): " fmt, __func__, ## args) /* >8 End of setting up dynamic logging */ /** when we resize a file structure, how many extra entries diff --git a/lib/compressdev/rte_compressdev_internal.h b/lib/compressdev/rte_compressdev_internal.h index b3b193e3ee..01b7764282 100644 --- a/lib/compressdev/rte_compressdev_internal.h +++ b/lib/compressdev/rte_compressdev_internal.h @@ -21,9 +21,10 @@ extern "C" { /* Logging Macros */ extern int compressdev_logtype; +#define RTE_LOGTYPE_COMPRESSDEV compressdev_logtype + #define COMPRESSDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, compressdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, COMPRESSDEV, "%s(): " fmt, __func__, ## args) /** * Dequeue processed packets from queue pair of a device. diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index 30ad2d9a95..83c8d44349 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -36,23 +36,21 @@ extern int rte_cryptodev_logtype; /* Logging Macros */ #define CDEV_LOG_ERR(...) \ - RTE_LOG(ERR, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_LOG_INFO(...) \ - RTE_LOG(INFO, CRYPTODEV, \ - RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(INFO, CRYPTODEV, "" __VA_ARGS__) #define CDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #define CDEV_PMD_TRACE(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,), \ dev, __func__, RTE_FMT_TAIL(__VA_ARGS__,))) /** diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 009a21849a..c1a166858c 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -32,9 +32,11 @@ static struct { } *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_LOGTYPE_DMA rte_dma_logtype + #define RTE_DMA_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_dma_logtype, RTE_FMT("dma: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, DMA, RTE_FMT("dma: " RTE_FMT_HEAD(__VA_ARGS__,), \ + RTE_FMT_TAIL(__VA_ARGS__,))) int rte_dma_dev_max(size_t dev_max) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 18debce99c..21e3a21903 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -179,8 +179,7 @@ extern int rte_eth_dev_logtype; #define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype #define RTE_ETHDEV_LOG_LINE(level, ...) 
\ - RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, ETHDEV, "" __VA_ARGS__) struct rte_mbuf; diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 2ec5aec0a8..50cf7d9057 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -33,14 +33,14 @@ extern "C" { /* Logging Macros */ #define RTE_EDEV_LOG_ERR(...) \ - RTE_LOG(ERR, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(ERR, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define RTE_EDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(DEBUG, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) #else #define RTE_EDEV_LOG_DEBUG(...) (void)0 diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 3f22e85173..6ebb7b257e 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -30,27 +30,30 @@ #define DATA_MZ_NAME_FORMAT "rte_event_timer_adapter_data_%d" RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM evtim_logtype RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM_BUF evtim_buffer_logtype RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE); +#define RTE_LOGTYPE_EVTIM_SVC evtim_svc_logtype static struct rte_event_timer_adapter *adapters; static const struct event_timer_adapter_ops swtim_ops; #define EVTIM_LOG(level, logtype, ...) \ - rte_log(RTE_LOG_ ## level, logtype, \ - RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) \ - "\n", __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, logtype, \ + RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) -#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, evtim_logtype, __VA_ARGS__) +#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, EVTIM, __VA_ARGS__) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define EVTIM_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM, __VA_ARGS__) #define EVTIM_BUF_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_buffer_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_BUF, __VA_ARGS__) #define EVTIM_SVC_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_svc_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_SVC, __VA_ARGS__) #else #define EVTIM_LOG_DBG(...) (void)0 #define EVTIM_BUF_LOG_DBG(...) (void)0 diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c index 6845d18b4d..79118c3e94 100644 --- a/lib/gpudev/gpudev.c +++ b/lib/gpudev/gpudev.c @@ -17,9 +17,11 @@ /* Logging */ RTE_LOG_REGISTER_DEFAULT(gpu_logtype, NOTICE); +#define RTE_LOGTYPE_GPUDEV gpu_logtype + #define GPU_LOG(level, ...) 
\ - rte_log(RTE_LOG_ ## level, gpu_logtype, RTE_FMT("gpu: " \ - RTE_FMT_HEAD(__VA_ARGS__, ) "\n", RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GPUDEV, RTE_FMT("gpu: " RTE_FMT_HEAD(__VA_ARGS__, ), \ + RTE_FMT_TAIL(__VA_ARGS__, ))) /* Set any driver error as EPERM */ #define GPU_DRV_RET(function) \ diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d0ef13b205..672a034287 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -18,10 +18,11 @@ #include "rte_graph_worker.h" extern int rte_graph_logtype; +#define RTE_LOGTYPE_GRAPH rte_graph_logtype #define GRAPH_LOG(level, ...) \ - rte_log(RTE_LOG_##level, rte_graph_logtype, \ - RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, GRAPH, \ + RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__, ))) #define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__) diff --git a/lib/member/member.h b/lib/member/member.h index ce150f7689..56dd2782a6 100644 --- a/lib/member/member.h +++ b/lib/member/member.h @@ -8,7 +8,7 @@ extern int librte_member_logtype; #define RTE_LOGTYPE_MEMBER librte_member_logtype #define MEMBER_LOG(level, ...) \ - RTE_LOG(level, MEMBER, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ + RTE_LOG_LINE(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ __func__, RTE_FMT_TAIL(__VA_ARGS__,))) diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 1d133e1f8c..b8c9d75a7d 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -16,11 +16,11 @@ struct telemetry_metrics_data tel_met_data; int metrics_log_level; +#define RTE_LOGTYPE_METRICS metrics_log_level /* Logging Macros */ #define METRICS_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, metrics_log_level, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, METRICS, "%s(): "fmt, __func__, ## args) #define METRICS_LOG_ERR(fmt, args...) \ METRICS_LOG(ERR, fmt, ## args) diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 63b2670bb0..5cf6f0566f 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -144,9 +144,10 @@ extern "C" { /* Logging Macro */ extern int rte_ml_dev_logtype; +#define RTE_LOGTYPE_MLDEV rte_ml_dev_logtype -#define RTE_MLDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_##level, rte_ml_dev_logtype, "%s(): " fmt "\n", __func__, ##args) +#define RTE_MLDEV_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, MLDEV, "%s(): " fmt, __func__, ##args) #define RTE_ML_STR_MAX 128 /**< Maximum length of name string */ diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index 900d6de7f4..b401ea3dd8 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -70,11 +70,11 @@ static const rte_net_crc_handler handlers_neon[] = { static uint16_t max_simd_bitwidth; -#define NET_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, libnet_logtype, "%s(): " fmt "\n", \ - __func__, ## args) - RTE_LOG_REGISTER_DEFAULT(libnet_logtype, INFO); +#define RTE_LOGTYPE_NET libnet_logtype + +#define NET_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, NET, "%s(): " fmt, __func__, ## args) /* Scalar handling */ diff --git a/lib/node/node_private.h b/lib/node/node_private.h index 26135aaa5b..5702146db4 100644 --- a/lib/node/node_private.h +++ b/lib/node/node_private.h @@ -11,9 +11,11 @@ #include <rte_mbuf_dyn.h> extern int rte_node_logtype; +#define RTE_LOGTYPE_NODE rte_node_logtype + #define NODE_LOG(level, node_name, ...) 
\ - rte_log(RTE_LOG_##level, rte_node_logtype, \ - RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, NODE, \ + RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ), \ node_name, __func__, __LINE__, \ RTE_FMT_TAIL(__VA_ARGS__, ))) diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 70963e7ee7..f6160f9911 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -18,9 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); #define RTE_LOGTYPE_PDUMP pdump_logtype -#define PDUMP_LOG_LINE(level, fmt, args...) \ - RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ - __func__, ## args) +#define PDUMP_LOG_LINE(level, fmt, args...) \ + RTE_LOG_LINE(level, PDUMP, "%s(): " fmt, __func__, ## args) /* Used for the multi-process communication */ #define PDUMP_MP "mp_pdump" diff --git a/lib/power/power_common.h b/lib/power/power_common.h index ea2febbd86..4e32548169 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -15,7 +15,7 @@ extern int power_logtype; #ifdef RTE_LIBRTE_POWER_DEBUG #define POWER_DEBUG_LOG(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) + RTE_LOG_LINE(ERR, POWER, "%s: " fmt, __func__, ## args) #else #define POWER_DEBUG_LOG(fmt, args...) #endif diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h index 7b9ef1d09f..7173282c66 100644 --- a/lib/rawdev/rte_rawdev_pmd.h +++ b/lib/rawdev/rte_rawdev_pmd.h @@ -27,11 +27,11 @@ extern "C" { #include "rte_rawdev.h" extern int librawdev_logtype; +#define RTE_LOGTYPE_RAWDEV librawdev_logtype /* Logging Macros */ #define RTE_RDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, librawdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, RAWDEV, "%s(): " fmt, __func__, ##args) #define RTE_RDEV_ERR(fmt, args...) \ RTE_RDEV_LOG(ERR, fmt, ## args) diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 0dca8310c0..23c9f89805 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -40,17 +40,15 @@ extern int rte_rcu_log_type; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) #else #define __RTE_RCU_DP_LOG(level, fmt, args...) #endif #if defined(RTE_LIBRTE_RCU_DEBUG) -#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\ +#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do { \ if (v->qsbr_cnt[thread_id].lock_cnt) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args); \ + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args); \ } while (0) #else #define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index dc111317a5..a50b841b1e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -209,8 +209,7 @@ extern int rte_regexdev_logtype; #define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype #define RTE_REGEXDEV_LOG_LINE(level, ...) 
\ - RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, REGEXDEV, "" __VA_ARGS__) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ diff --git a/lib/stack/stack_pvt.h b/lib/stack/stack_pvt.h index c7eab4027d..2dce42a9da 100644 --- a/lib/stack/stack_pvt.h +++ b/lib/stack/stack_pvt.h @@ -8,10 +8,10 @@ #include <rte_log.h> extern int stack_logtype; +#define RTE_LOGTYPE_STACK stack_logtype #define STACK_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, STACK, "%s(): "fmt, __func__, ##args) #define STACK_LOG_ERR(fmt, args...) \ STACK_LOG(ERR, fmt, ## args) diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 5c655e2b25..31e2391867 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -57,9 +57,7 @@ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); #define RTE_LOGTYPE_TMTY logtype -#define TMTY_LOG_LINE(l, ...) \ - RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) +#define TMTY_LOG_LINE(l, ...) RTE_LOG_LINE(l, TMTY, "TELEMETRY: " __VA_ARGS__) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5a74d0e628..25c0f86e55 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -678,12 +678,12 @@ extern int vhost_data_log_level; #define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level #define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ - RTE_LOG(level, VHOST_CONFIG, \ - "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_LINE(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt, prefix, ##args) #define VHOST_DATA_LOG(prefix, level, fmt, args...) \ - RTE_LOG_DP(level, VHOST_DATA, \ - "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_DP_LINE(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt, prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 6e5443e5f8..3704fbbb3d 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -21,15 +21,15 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO); #define RTE_LOGTYPE_VHOST_CRYPTO vhost_crypto_logtype #define VC_LOG_ERR(fmt, args...) \ - RTE_LOG(ERR, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(ERR, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #define VC_LOG_INFO(fmt, args...) \ - RTE_LOG(INFO, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(INFO, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VC_LOG_DBG(fmt, args...) \ - RTE_LOG(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #else #define VC_LOG_DBG(fmt, args...) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
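All of the conversions in this patch share one shape: the component keeps its own wrapper (prefix, __func__, its registered logtype) and only delegates line termination to the new helper. Roughly, as a simplified sketch rather than the exact rte_log.h definitions:

/* The per-line helper owns the trailing newline. */
#define LOG_LINE(level, logtype, fmt, ...) \
	rte_log(RTE_LOG_ ## level, logtype, fmt "\n", ##__VA_ARGS__)

/* Before: each wrapper appended "\n" on its own. */
#define STACK_LOG_OLD(level, fmt, args...) \
	rte_log(RTE_LOG_ ## level, stack_logtype, "%s(): " fmt "\n", __func__, ##args)

/* After: the wrapper forwards to the helper and keeps only its prefix. */
#define STACK_LOG_NEW(level, fmt, args...) \
	LOG_LINE(level, stack_logtype, "%s(): " fmt, __func__, ##args)

The real RTE_LOG_LINE addresses the logtype by its RTE_LOGTYPE_ token, which is why each hunk that switches a library over also adds a "#define RTE_LOGTYPE_XXX xxx_logtype" alias next to the registered logtype variable.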
* [PATCH v4 00/14] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (6 preceding siblings ...) 2023-12-18 9:27 ` [PATCH v3 00/14] Detect superfluous newline in logs David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 01/14] hash: remove some dead code David Marchand ` (13 more replies) 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand 8 siblings, 14 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb Getting readable and consistent logs is important when running a DPDK application, especially when troubleshooting. A common issue with logs is when a DPDK change do not add (or on the contrary add too many \n) in the format string. This issue would only get noticed when actually hitting this log (which may be a situation hard to reach). This series proposes to introduce a new RTE_LOG_LINE helper that is responsible for logging a one line message and spews a build error (with gcc) if any \n is part of the format string. Because this is still a RFC and a lot of changes are added in this v2, no ack from the v1 has been kept. Since the v1 discussion on the cover letter, I changed my mind, and made the choice to break existing logging helpers exported in the public API. The reasoning is that those should not be used in the first place: logs should be produced only by the library that registers the logtype. Some multiline logging for debugging and the test assert macros are still present, but in this case an explicit call to RTE_LOG() is done. This can be checked with a simple: $ git grep 'RTE_LOG(' -- lib/ :^lib/log/ lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n" lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n" lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \ lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...) 
RTE_LOG(lvl, IPFRAG, fmt, ##args) lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" Changes since v3: - fixed some checkpatch complaints, Changes since RFC v2: - sent as non RFC, - fixed format string crossing line boundaries, - avoided potential collision with BPF_ namespace, Changes since RFC v1: - rebased after Stephen log changes, - added more fixes as I was making progress on the topic, - added a check so dpdk developers stop using RTE_LOG(), - added preparation patches, like "lib: replace logging helpers", - converted all libraries, keeping some special cases with explicit calls to RTE_LOG, -- David Marchand David Marchand (14): hash: remove some dead code regexdev: fix logtype register lib: use dedicated logtypes and macros lib: add newline in logs lib: remove redundant newline from logs eal/linux: remove log paraphrasing the doc bpf: remove log level in internal helper lib: simplify multilines log messages rcu: introduce a logging helper vhost: improve log for memory dumping configuration log: add a per line log helper lib: convert to per line logging lib: replace logging helpers lib: use per line logging in helpers devtools/checkpatches.sh | 8 + drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/acl/acl_bld.c | 28 +- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 4 +- lib/bbdev/rte_bbdev.c | 11 +- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 6 +- lib/bpf/bpf_validate.c | 44 +- lib/cfgfile/rte_cfgfile.c | 18 +- lib/compressdev/rte_compressdev_internal.h | 5 +- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 4 +- lib/cryptodev/rte_cryptodev.h | 22 +- lib/dispatcher/rte_dispatcher.c | 12 +- lib/dmadev/rte_dmadev.c | 8 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 +- lib/eal/common/eal_common_lcore.c | 26 +- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +- lib/eal/common/eal_common_memzone.c | 24 +- lib/eal/common/eal_common_options.c | 236 +++--- lib/eal/common/eal_common_proc.c | 112 +-- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 +- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 +- lib/eal/common/malloc_mp.c | 72 +- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 75 +- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 
122 +-- lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 +- lib/eal/linux/eal_hugepage_info.c | 38 +- lib/eal/linux/eal_interrupts.c | 116 +-- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +-- lib/eal/linux/eal_memory.c | 208 ++--- lib/eal/linux/eal_thread.c | 4 +- lib/eal/linux/eal_timer.c | 14 +- lib/eal/linux/eal_vfio.c | 270 +++---- lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +- lib/eal/windows/eal.c | 36 +- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 7 +- lib/eal/windows/eal_memalloc.c | 50 +- lib/eal/windows/eal_memory.c | 22 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 6 +- lib/eal/windows/rte_thread.c | 28 +- lib/efd/rte_efd.c | 58 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 854 ++++++++++----------- lib/ethdev/rte_ethdev.h | 51 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/eventdev/eventdev_pmd.h | 18 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +- lib/eventdev/rte_event_dma_adapter.c | 18 +- lib/eventdev/rte_event_eth_rx_adapter.c | 40 +- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 21 +- lib/eventdev/rte_eventdev.c | 10 +- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/gpudev/gpudev.c | 6 +- lib/graph/graph_private.h | 7 +- lib/hash/rte_cuckoo_hash.c | 52 +- lib/hash/rte_cuckoo_hash.h | 11 - lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 +- lib/log/log.c | 6 +- lib/log/rte_log.h | 21 + lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/mempool/rte_mempool.c | 24 +- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/metrics/rte_metrics_telemetry.c | 6 +- lib/mldev/rte_mldev.c | 102 +-- lib/mldev/rte_mldev.h | 5 +- lib/net/rte_net_crc.c | 14 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/node/node_private.h | 8 +- lib/pdump/rte_pdump.c | 113 ++- lib/pipeline/rte_pipeline.c | 228 +++--- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 +- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 36 +- lib/power/power_acpi_cpufreq.c | 116 +-- lib/power/power_amd_pstate_cpufreq.c | 132 ++-- lib/power/power_common.c | 14 +- 
lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 130 ++-- lib/power/power_intel_uncore.c | 72 +- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 156 ++-- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +- lib/power/rte_power_uncore.c | 14 +- lib/rawdev/rte_rawdev_pmd.h | 4 +- lib/rcu/rte_rcu_qsbr.c | 66 +- lib/rcu/rte_rcu_qsbr.h | 17 +- lib/regexdev/rte_regexdev.c | 88 +-- lib/regexdev/rte_regexdev.h | 13 +- lib/reorder/rte_reorder.c | 32 +- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 +++---- lib/stack/rte_stack.c | 8 +- lib/stack/stack_pvt.h | 4 +- lib/table/rte_table_acl.c | 72 +- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 +- lib/table/rte_table_hash_key32.c | 38 +- lib/table/rte_table_hash_key8.c | 38 +- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 +- lib/table/rte_table_lpm_ipv6.c | 44 +- lib/table/rte_table_stub.c | 4 +- lib/telemetry/telemetry.c | 39 +- lib/vhost/fd_man.c | 8 +- lib/vhost/iotlb.c | 36 +- lib/vhost/socket.c | 102 +-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 +-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 +-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_crypto.c | 12 +- lib/vhost/vhost_user.c | 530 ++++++------- lib/vhost/virtio_net.c | 188 ++--- lib/vhost/virtio_net_ctrl.c | 38 +- 209 files changed, 3986 insertions(+), 3971 deletions(-) create mode 100644 lib/member/member.h -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
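The build-error behaviour described in this cover letter is what distinguishes RTE_LOG_LINE from a plain wrapper: with gcc, string builtins applied to a literal fold to constants, so an embedded \n can be rejected at compile time. A self-contained sketch of that idea, with illustrative names (not necessarily the exact macros merged into lib/log/rte_log.h):

#include <assert.h>
#include <stdio.h>

/* gcc folds __builtin_strchr() on a string literal to a constant, so the
 * result can feed a static_assert; this is a gcc extension, which matches
 * the cover letter's "build error (with gcc)" wording. */
#define LOG_CHECK_NO_NEWLINE(fmt) \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain \\n")

#define LOG_LINE(fmt, ...) do { \
	LOG_CHECK_NO_NEWLINE(fmt); \
	printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
	LOG_LINE("%d queue pairs configured", 4);  /* builds, prints one line */
	/* LOG_LINE("oops\n");  -> static assertion failure at build time */
	return 0;
}

A real implementation would guard such a check per toolchain and define it empty elsewhere, which is consistent with the series only promising the error for gcc.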
* [PATCH v4 01/14] hash: remove some dead code 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 02/14] regexdev: fix logtype register David Marchand ` (12 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ruifeng Wang, Ray Kinsella, Dharmik Thakkar This macro is not used. Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/hash/rte_cuckoo_hash.h | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h index f7afc4dd79..8ea793c66e 100644 --- a/lib/hash/rte_cuckoo_hash.h +++ b/lib/hash/rte_cuckoo_hash.h @@ -29,17 +29,6 @@ #define RETURN_IF_TRUE(cond, retval) #endif -#if defined(RTE_LIBRTE_HASH_DEBUG) -#define ERR_IF_TRUE(cond, fmt, args...) do { \ - if (cond) { \ - RTE_LOG(ERR, HASH, fmt, ##args); \ - return; \ - } \ -} while (0) -#else -#define ERR_IF_TRUE(cond, fmt, args...) -#endif - #include <rte_hash_crc.h> #include <rte_jhash.h> -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v4 02/14] regexdev: fix logtype register 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand 2023-12-18 14:37 ` [PATCH v4 01/14] hash: remove some dead code David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 16:46 ` Stephen Hemminger 2023-12-18 14:37 ` [PATCH v4 03/14] lib: use dedicated logtypes and macros David Marchand ` (11 subsequent siblings) 13 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Ori Kam, Guy Kaneti, Parav Pandit This library logtype was not initialized so its logs would end up under the 0 logtype, iow, RTE_LOGTYPE_EAL. Fixes: b25246beaefc ("regexdev: add core functions") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Acked-by: Ori Kam <orika@nvidia.com> --- lib/regexdev/rte_regexdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index caec069182..d38a85eb0b 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -19,7 +19,7 @@ static struct { struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS]; } *rte_regexdev_shared_data; -int rte_regexdev_logtype; +RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO); static uint16_t regexdev_find_free_dev(void) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v4 02/14] regexdev: fix logtype register 2023-12-18 14:37 ` [PATCH v4 02/14] regexdev: fix logtype register David Marchand @ 2023-12-18 16:46 ` Stephen Hemminger 0 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-18 16:46 UTC (permalink / raw) To: David Marchand Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb, stable, Tyler Retzlaff, Ori Kam, Guy Kaneti, Parav Pandit On Mon, 18 Dec 2023 15:37:51 +0100 David Marchand <david.marchand@redhat.com> wrote: > This library logtype was not initialized so its logs would end up under > the 0 logtype, iow, RTE_LOGTYPE_EAL. > > Fixes: b25246beaefc ("regexdev: add core functions") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> ^ permalink raw reply [flat|nested] 122+ messages in thread
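For readers unfamiliar with the registration being fixed in the patch above, here is a minimal sketch of how a DPDK library declares its own logtype with RTE_LOG_REGISTER_DEFAULT and logs through it, assuming the usual DPDK build environment; the "mylib"/MYLIB names are hypothetical placeholders, not identifiers from the patch:

    /* Sketch only: a hypothetical library registering and using its own logtype. */
    #include <rte_log.h>

    /* Defines and registers 'int mylib_logtype' with a default level of INFO,
     * so messages are no longer attributed to logtype 0 (EAL). */
    RTE_LOG_REGISTER_DEFAULT(mylib_logtype, INFO);

    /* The helper appends the trailing newline so call sites do not. */
    #define MYLIB_LOG(level, fmt, args...) \
            rte_log(RTE_LOG_ ## level, mylib_logtype, "MYLIB: " fmt "\n", ## args)

    static int
    mylib_check(int arg)
    {
            if (arg < 0) {
                    MYLIB_LOG(ERR, "invalid argument %d", arg);
                    return -1;
            }
            return 0;
    }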
* [PATCH v4 03/14] lib: use dedicated logtypes and macros 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand 2023-12-18 14:37 ` [PATCH v4 01/14] hash: remove some dead code David Marchand 2023-12-18 14:37 ` [PATCH v4 02/14] regexdev: fix logtype register David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 04/14] lib: add newline in logs David Marchand ` (10 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Akhil Goyal, Fan Zhang, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V No printf! When a dedicated log helper exists, use it. And no usurpation please: a library should log under its logtype (see the eventdev rx adapter update for example). Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for consistency with the rest of eventdev (private) macros. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/cryptodev/rte_cryptodev.c | 2 +- lib/ethdev/ethdev_driver.c | 4 ++-- lib/ethdev/ethdev_private.c | 2 +- lib/ethdev/rte_class_eth.c | 2 +- lib/eventdev/rte_event_dma_adapter.c | 4 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 12 ++++++------ lib/eventdev/rte_eventdev.c | 6 +++--- lib/mempool/rte_mempool_ops.c | 2 +- 8 files changed, 17 insertions(+), 17 deletions(-) diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index 25e3ec12d1..ead8c9a623 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2684,7 +2684,7 @@ rte_cryptodev_driver_id_get(const char *name) int driver_id = -1; if (name == NULL) { - RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL"); + CDEV_LOG_DEBUG("name pointer NULL"); return -1; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index fff4b7b4cd..55a9dcc565 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_LOG(ERR, EAL, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", dargs); result = -1; goto parse_cleanup; @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index e98b7188b0..0e1c7b23c1 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); return str == NULL ? 
-1 : 0; } diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index b61dae849d..311beb17cb 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_LOG(ERR, EAL, "cannot parse argument list\n"); + RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); rte_errno = EINVAL; return NULL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index af4b5ad388..cbf9405438 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1046,7 +1046,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->vchanq == NULL) { - printf("Queue pair add not supported\n"); + RTE_EDEV_LOG_ERR("Queue pair add not supported"); return -ENOMEM; } } @@ -1057,7 +1057,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->tqmap == NULL) { - printf("tq pair add not supported\n"); + RTE_EDEV_LOG_ERR("tq pair add not supported"); return -ENOMEM; } } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 6db03adf04..82ae31712d 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -314,9 +314,9 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, } \ } while (0) -#define RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(port_id, retval) do { \ +#define RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_EDEV_LOG_ERR("Invalid port_id=%u", port_id); \ ret = retval; \ goto error; \ } \ @@ -3671,7 +3671,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3743,7 +3743,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3813,7 +3813,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3868,7 +3868,7 @@ handle_rxa_instance_get(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); diff --git 
a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 0ca32d6721..ae50821a3f 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1428,8 +1428,8 @@ rte_event_vector_pool_create(const char *name, unsigned int n, int ret; if (!nb_elem) { - RTE_LOG(ERR, EVENTDEV, - "Invalid number of elements=%d requested\n", nb_elem); + RTE_EDEV_LOG_ERR("Invalid number of elements=%d requested", + nb_elem); rte_errno = EINVAL; return NULL; } @@ -1444,7 +1444,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n"); + RTE_EDEV_LOG_ERR("error setting mempool handler"); goto err; } diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index ae1d288f27..e871de9ec9 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -46,7 +46,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", __func__, h->name); rte_errno = EEXIST; return -EEXIST; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
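To make the "no usurpation" point above concrete, a simplified before/after sketch: the log string comes from the ethdev hunks above, while the surrounding function is invented for illustration and assumes the RTE_ETHDEV_LOG helper visible in those hunks. A library message should go through the library's own helper and logtype rather than being logged as EAL:

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    parse_representor_arg(const char *str)
    {
            if (str == NULL) {
                    /* before: attributed to EAL, ignores ethdev's log level */
                    /* RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); */
                    /* after: goes through ethdev's dedicated helper and logtype */
                    RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str);
                    return -EINVAL;
            }
            return 0;
    }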
* [PATCH v4 04/14] lib: add newline in logs 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (2 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 03/14] lib: use dedicated logtypes and macros David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 05/14] lib: remove redundant newline from logs David Marchand ` (9 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Harman Kalra, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala Fix places leading to a log message not terminated with a newline. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/eal/common/eal_common_options.c | 2 +- lib/eal/linux/eal_hugepage_info.c | 2 +- lib/eal/linux/eal_interrupts.c | 2 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/rte_ethdev.c | 40 ++++++++++++++--------------- lib/lpm/rte_lpm6.c | 6 ++--- lib/power/guest_channel.c | 2 +- lib/power/rte_power_pmd_mgmt.c | 6 ++--- 8 files changed, 31 insertions(+), 31 deletions(-) diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index a6d21f1cba..e9ba01fb89 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -2141,7 +2141,7 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); + RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); return -EPERM; } diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 581d9dfc91..36a495fb1f 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", + RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index d4919dff45..eabac24992 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized"); + RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); return -EINVAL; } } else { diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index 320e3e0093..ddb559aa95 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p", + RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", (void *)eth_dev, (void 
*)pci_dev); return; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 3858983fcc..b9d99ece15 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -724,7 +724,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name"); + RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); return -EINVAL; } @@ -2394,41 +2394,41 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu\n", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported"); + "Attempt to use locked device memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported"); + "Attempt to use DPDK memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue"); + "Attempt to use mutually exclusive memory settings for Rx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set"); + "Attempt to force Rx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0", + "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -2597,41 +2597,41 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported"); + "Attempt to use locked device memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { 
RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported"); + "Attempt to use DPDK memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue"); + "Attempt to use mutually exclusive memory settings for Tx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set"); + "Attempt to force Tx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0", + "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -6716,7 +6716,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); return -EINVAL; } @@ -6752,7 +6752,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); return -EINVAL; } @@ -6780,7 +6780,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration", + "Cannot set IP reassembly configuration\n", port_id); return -EINVAL; } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 873cc8bc26..24ce7dd022 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index cc05347425..a6f2097d5b 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -90,7 +90,7 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { 
RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s", fd_path); + "file %s\n", fd_path); goto error; } /* QEMU needs a delay after connection */ diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 38f8384085..6f18ed0adf 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); + RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); return -EINVAL; } pause_duration = duration; @@ -709,7 +709,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); + RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); + RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
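Why the missing trailing newlines fixed above matter in practice: without the '\n', the next message is emitted on the same output line. A tiny standalone illustration, with plain C and stderr standing in for the DPDK log stream and the strings taken from the hunks above:

    #include <stdio.h>

    int main(void)
    {
            /* first message forgets the trailing newline ... */
            fprintf(stderr, "Cannot set max SIMD bitwidth - user runtime override enabled");
            /* ... so the next one is glued onto the same output line: */
            fprintf(stderr, "the efd_counter_size is oversized\n");
            /* output:
             * Cannot set max SIMD bitwidth - user runtime override enabledthe efd_counter_size is oversized
             */
            return 0;
    }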
* [PATCH v4 05/14] lib: remove redundant newline from logs 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (3 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 04/14] lib: add newline in logs David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 06/14] eal/linux: remove log paraphrasing the doc David Marchand ` (8 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Chengwen Feng, Mattias Rönnblom, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia Fix places where two newline characters may be logged. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com> --- Changes since RFC v1: - split fixes on direct calls to printf or RTE_LOG in a previous patch, --- drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/bbdev/rte_bbdev.c | 6 +- lib/cfgfile/rte_cfgfile.c | 14 ++-- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 2 +- lib/dispatcher/rte_dispatcher.c | 12 +-- lib/dmadev/rte_dmadev.c | 2 +- lib/eal/windows/eal_memory.c | 2 +- lib/eventdev/eventdev_pmd.h | 6 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +-- lib/eventdev/rte_event_dma_adapter.c | 14 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 4 +- lib/eventdev/rte_eventdev.c | 4 +- lib/metrics/rte_metrics_telemetry.c | 2 +- lib/mldev/rte_mldev.c | 102 ++++++++++++------------ lib/net/rte_net_crc.c | 6 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/rcu/rte_rcu_qsbr.c | 4 +- lib/rcu/rte_rcu_qsbr.h | 8 +- lib/stack/rte_stack.c | 8 +- lib/vhost/vhost_crypto.c | 6 +- 27 files changed, 135 insertions(+), 135 deletions(-) diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c index 52d6d010c7..f21f9cc5a0 100644 --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c @@ -407,7 +407,7 @@ ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer) resp_param->result = ipsec_mb_qp_release(dev, qp_id); break; default: - CDEV_LOG_ERR("invalid mp request type\n"); + CDEV_LOG_ERR("invalid mp request type"); } out: diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index cfebea09c7..e09bb97abb 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -1106,12 +1106,12 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, intr_handle = dev->intr_handle; if (intr_handle == NULL) { - rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id); + rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id); return -ENOTSUP; } if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) { - rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n", + rte_bbdev_log(ERR, "Device %u 
queue_id %u is too big", dev_id, queue_id); return -ENOTSUP; } @@ -1120,7 +1120,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (ret && (ret != -EEXIST)) { rte_bbdev_log(ERR, - "dev %u q %u int ctl error op %d epfd %d vec %u\n", + "dev %u q %u int ctl error op %d epfd %d vec %u", dev_id, queue_id, op, epfd, vec); return ret; } diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index eefba6e408..2f9cc0722a 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -137,7 +137,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) unsigned int i; if (!params) { - CFG_LOG(ERR, "missing cfgfile parameters\n"); + CFG_LOG(ERR, "missing cfgfile parameters"); return -EINVAL; } @@ -150,7 +150,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) } if (valid_comment == 0) { - CFG_LOG(ERR, "invalid comment characters %c\n", + CFG_LOG(ERR, "invalid comment characters %c", params->comment_character); return -ENOTSUP; } @@ -188,7 +188,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, lineno++; if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) { CFG_LOG(ERR, " line %d - no \\n found on string. " - "Check if line too long\n", lineno); + "Check if line too long", lineno); goto error1; } /* skip parsing if comment character found */ @@ -209,7 +209,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, char *end = memchr(buffer, ']', len); if (end == NULL) { CFG_LOG(ERR, - "line %d - no terminating ']' character found\n", + "line %d - no terminating ']' character found", lineno); goto error1; } @@ -225,7 +225,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, split[1] = memchr(buffer, '=', len); if (split[1] == NULL) { CFG_LOG(ERR, - "line %d - no '=' character found\n", + "line %d - no '=' character found", lineno); goto error1; } @@ -249,7 +249,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, if (!(flags & CFG_FLAG_EMPTY_VALUES) && (*split[1] == '\0')) { CFG_LOG(ERR, - "line %d - cannot use empty values\n", + "line %d - cannot use empty values", lineno); goto error1; } @@ -414,7 +414,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname, return 0; } - CFG_LOG(ERR, "entry name doesn't exist\n"); + CFG_LOG(ERR, "entry name doesn't exist"); return -EINVAL; } diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c index 156bccd972..762b44f03e 100644 --- a/lib/compressdev/rte_compressdev_pmd.c +++ b/lib/compressdev/rte_compressdev_pmd.c @@ -100,12 +100,12 @@ rte_compressdev_pmd_create(const char *name, struct rte_compressdev *compressdev; if (params->name[0] != '\0') { - COMPRESSDEV_LOG(INFO, "User specified device name = %s\n", + COMPRESSDEV_LOG(INFO, "User specified device name = %s", params->name); name = params->name; } - COMPRESSDEV_LOG(INFO, "Creating compressdev %s\n", name); + COMPRESSDEV_LOG(INFO, "Creating compressdev %s", name); COMPRESSDEV_LOG(INFO, "Init parameters - name: %s, socket id: %d", name, params->socket_id); diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index ead8c9a623..b233c0ecd7 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2074,7 +2074,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id, } if (xforms == NULL) { - CDEV_LOG_ERR("Invalid xform\n"); + CDEV_LOG_ERR("Invalid xform"); rte_errno = EINVAL; return NULL; } diff --git 
a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c index 10d02edde9..95dd41b818 100644 --- a/lib/dispatcher/rte_dispatcher.c +++ b/lib/dispatcher/rte_dispatcher.c @@ -246,7 +246,7 @@ evd_service_register(struct rte_dispatcher *dispatcher) rc = rte_service_component_register(&service, &dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Registration of dispatcher service " - "%s failed with error code %d\n", + "%s failed with error code %d", service.name, rc); return rc; @@ -260,7 +260,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher) rc = rte_service_component_unregister(dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " - "failed with error code %d\n", rc); + "failed with error code %d", rc); return rc; } @@ -279,7 +279,7 @@ rte_dispatcher_create(uint8_t event_dev_id) RTE_CACHE_LINE_SIZE, socket_id); if (dispatcher == NULL) { - RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher"); rte_errno = ENOMEM; return NULL; } @@ -483,7 +483,7 @@ evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); if (unreg_handler == NULL) { - RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + RTE_EDEV_LOG_ERR("Invalid handler id %d", handler_id); return -EINVAL; } @@ -602,7 +602,7 @@ rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); if (unreg_finalizer == NULL) { - RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + RTE_EDEV_LOG_ERR("Invalid finalizer id %d", finalizer_id); return -EINVAL; } @@ -636,7 +636,7 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) */ if (rc != 0) RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " - "service component run state to %d\n", rc, + "service component run state to %d", rc, state); RTE_VERIFY(rc == 0); diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 4e5e420c82..009a21849a 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -726,7 +726,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status * return -EINVAL; if (vchan >= dev->data->dev_conf.nb_vchans) { - RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan); + RTE_DMA_LOG(ERR, "Device %u vchan %u out of range", dev_id, vchan); return -EINVAL; } diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index 31410a41fd..fd39155163 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -110,7 +110,7 @@ eal_mem_win32api_init(void) VirtualAlloc2_ptr = (VirtualAlloc2_type)( (void *)GetProcAddress(library, function)); if (VirtualAlloc2_ptr == NULL) { - RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")\n", + RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")", library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 30bd90085c..2ec5aec0a8 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -49,14 +49,14 @@ extern "C" { /* Macros to check for valid device */ #define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return retval; \ } \ } while (0) #define RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, errno, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ rte_errno = errno; \ return retval; \ } \ @@ -64,7 +64,7 @@ extern "C" { #define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return; \ } \ } while (0) diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 1b435c9f0e..d46595d190 100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -133,7 +133,7 @@ static struct event_crypto_adapter **event_crypto_adapter; /* Macros to check for valid adapter */ #define EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!eca_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -309,7 +309,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) return -EIO; @@ -319,7 +319,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -391,7 +391,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, sizeof(struct crypto_device_info), 0, socket_id); if (adapter->cdevs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices"); eca_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -1403,7 +1403,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1436,7 +1436,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index cbf9405438..4196164305 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -20,7 +20,7 @@ #define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ do { \ if (!edma_adapter_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id 
= %d", id); \ return retval; \ } \ } while (0) @@ -313,7 +313,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_dev_configure(evdev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); if (started) { if (rte_event_dev_start(evdev_id)) return -EIO; @@ -323,7 +323,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_port_setup(evdev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); return ret; } @@ -407,7 +407,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, num_dma_dev * sizeof(struct dma_device_info), 0, socket_id); if (adapter->dma_devs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -417,7 +417,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, for (i = 0; i < num_dma_dev; i++) { ret = rte_dma_info_get(i, &info); if (ret) { - RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + RTE_EDEV_LOG_ERR("Failed to get dma device info"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return ret; @@ -1297,7 +1297,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1326,7 +1326,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 82ae31712d..1b83a55b5c 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -293,14 +293,14 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ return retval; \ } \ } while (0) #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_GOTO_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ ret = retval; \ goto error; \ } \ @@ -308,7 +308,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, retval) do { \ if ((token) == NULL || strlen(token) == 0 || !isdigit(*token)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token\n"); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token"); \ ret = retval; \ goto error; \ } \ @@ -1540,7 +1540,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) @@ -1551,7 +1551,7 @@ rxa_default_conf_cb(uint8_t 
id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -1628,7 +1628,7 @@ rxa_create_intr_thread(struct event_eth_rx_adapter *rx_adapter) if (!err) return 0; - RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); error: rte_ring_free(rx_adapter->intr_ring); @@ -1644,12 +1644,12 @@ rxa_destroy_intr_thread(struct event_eth_rx_adapter *rx_adapter) err = pthread_cancel((pthread_t)rx_adapter->rx_intr_thread.opaque_id); if (err) - RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d\n", + RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d", err); err = rte_thread_join(rx_adapter->rx_intr_thread, NULL); if (err) - RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); rte_ring_free(rx_adapter->intr_ring); @@ -1915,7 +1915,7 @@ rxa_init_service(struct event_eth_rx_adapter *rx_adapter, uint8_t id) if (rte_mbuf_dyn_rx_timestamp_register( &event_eth_rx_timestamp_dynfield_offset, &event_eth_rx_timestamp_dynflag) != 0) { - RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf\n"); + RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf"); return -rte_errno; } @@ -2445,7 +2445,7 @@ rxa_create(uint8_t id, uint8_t dev_id, RTE_DIM(default_rss_key)); if (rx_adapter->eth_devices == NULL) { - RTE_EDEV_LOG_ERR("failed to get mem for eth devices\n"); + RTE_EDEV_LOG_ERR("failed to get mem for eth devices"); rte_free(rx_adapter); return -ENOMEM; } @@ -2497,12 +2497,12 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params, return 0; } else if (!rxa_params->use_queue_event_buf && rxa_params->event_buf_size == 0) { - RTE_EDEV_LOG_ERR("event buffer size can't be zero\n"); + RTE_EDEV_LOG_ERR("event buffer size can't be zero"); return -EINVAL; } else if (rxa_params->use_queue_event_buf && rxa_params->event_buf_size != 0) { RTE_EDEV_LOG_ERR("event buffer size needs to be configured " - "as part of queue add\n"); + "as part of queue add"); return -EINVAL; } @@ -3597,7 +3597,7 @@ handle_rxa_stats(const char *cmd __rte_unused, /* Get Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id, &rx_adptr_stats)) { - RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats"); return -1; } @@ -3636,7 +3636,7 @@ handle_rxa_stats_reset(const char *cmd __rte_unused, /* Reset Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) { - RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats"); return -1; } diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c index 360d5caf6a..56435be991 100644 --- a/lib/eventdev/rte_event_eth_tx_adapter.c +++ b/lib/eventdev/rte_event_eth_tx_adapter.c @@ -334,7 +334,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, pc); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); if (started) { if (rte_event_dev_start(dev_id)) diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 27466707bc..3f22e85173 100644 --- 
a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -106,7 +106,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to configure event dev %u\n", dev_id); + EVTIM_LOG_ERR("failed to configure event dev %u", dev_id); if (started) if (rte_event_dev_start(dev_id)) return -EIO; @@ -116,7 +116,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to setup event port %u on event dev %u\n", + EVTIM_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id); return ret; } diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index ae50821a3f..157752868d 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1007,13 +1007,13 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t } if (*dev->dev_ops->port_link == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } if (profile_id && *dev->dev_ops->port_link_profile == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 5be21b2e86..1d133e1f8c 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -363,7 +363,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names, } } if (j == num_metrics) { - METRICS_LOG_WARN("Invalid stat name %s\n", + METRICS_LOG_WARN("Invalid stat name %s", stat_names[i]); free(names); return -EINVAL; diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c index cc5f2e0cc6..196b1850e6 100644 --- a/lib/mldev/rte_mldev.c +++ b/lib/mldev/rte_mldev.c @@ -159,7 +159,7 @@ int rte_ml_dev_init(size_t dev_max) { if (dev_max == 0 || dev_max > INT16_MAX) { - RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)\n", dev_max, INT16_MAX); + RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)", dev_max, INT16_MAX); rte_errno = EINVAL; return -rte_errno; } @@ -217,7 +217,7 @@ rte_ml_dev_socket_id(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -232,7 +232,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -241,7 +241,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) return -ENOTSUP; if (dev_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL", dev_id); return -EINVAL; } memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); @@ -257,7 +257,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -271,7 +271,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) } if (config == NULL) 
{ - RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL", dev_id); return -EINVAL; } @@ -280,7 +280,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) return ret; if (config->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u\n", dev_id, + RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u", dev_id, config->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } @@ -294,7 +294,7 @@ rte_ml_dev_close(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -318,7 +318,7 @@ rte_ml_dev_start(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -345,7 +345,7 @@ rte_ml_dev_stop(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -372,7 +372,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -386,7 +386,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, } if (qp_conf == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL", dev_id); return -EINVAL; } @@ -404,7 +404,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -413,7 +413,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) return -ENOTSUP; if (stats == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL", dev_id); return -EINVAL; } memset(stats, 0, sizeof(struct rte_ml_dev_stats)); @@ -427,7 +427,7 @@ rte_ml_dev_stats_reset(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return; } @@ -445,7 +445,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -462,7 +462,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -471,12 +471,12 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i return -ENOTSUP; if (name == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL\n", dev_id); + 
RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL", dev_id); return -EINVAL; } @@ -490,7 +490,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -499,12 +499,12 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t return -ENOTSUP; if (stat_ids == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL", dev_id); return -EINVAL; } @@ -518,7 +518,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_ struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -535,7 +535,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -544,7 +544,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) return -ENOTSUP; if (fd == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL", dev_id); return -EINVAL; } @@ -557,7 +557,7 @@ rte_ml_dev_selftest(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -574,7 +574,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -583,12 +583,12 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * return -ENOTSUP; if (params == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL", dev_id); return -EINVAL; } if (model_id == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL", dev_id); return -EINVAL; } @@ -601,7 +601,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -618,7 +618,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -635,7 +635,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -652,7 +652,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf struct rte_ml_dev *dev; if 
(!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -661,7 +661,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf return -ENOTSUP; if (model_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL\n", dev_id, + RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL", dev_id, model_id); return -EINVAL; } @@ -675,7 +675,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -684,7 +684,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) return -ENOTSUP; if (buffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL", dev_id); return -EINVAL; } @@ -698,7 +698,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -707,12 +707,12 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d return -ENOTSUP; if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -726,7 +726,7 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -735,12 +735,12 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * return -ENOTSUP; if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -811,7 +811,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -823,13 +823,13 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -847,7 +847,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", 
dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -859,13 +859,13 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -883,7 +883,7 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -892,12 +892,12 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error return -ENOTSUP; if (op == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL", dev_id); return -EINVAL; } if (error == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL", dev_id); return -EINVAL; } #else diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index a685f9e7bb..900d6de7f4 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -179,7 +179,7 @@ avx512_vpclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_512) return handlers_avx512; #endif - NET_LOG(INFO, "Requirements not met, can't use AVX512\n"); + NET_LOG(INFO, "Requirements not met, can't use AVX512"); return NULL; } @@ -205,7 +205,7 @@ sse42_pclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_sse42; #endif - NET_LOG(INFO, "Requirements not met, can't use SSE\n"); + NET_LOG(INFO, "Requirements not met, can't use SSE"); return NULL; } @@ -231,7 +231,7 @@ neon_pmull_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_neon; #endif - NET_LOG(INFO, "Requirements not met, can't use NEON\n"); + NET_LOG(INFO, "Requirements not met, can't use NEON"); return NULL; } diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index 3e8fac1df4..475eff6abe 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -160,13 +160,13 @@ ethdev_ptype_setup(uint16_t port, uint16_t queue) if (!l3_ipv4 || !l3_ipv6) { node_info("ethdev_rx", - "Enabling ptype callback for required ptypes on port %u\n", + "Enabling ptype callback for required ptypes on port %u", port); if (!rte_eth_add_rx_callback(port, queue, eth_pkt_parse_cb, NULL)) { node_err("ethdev_rx", - "Failed to add rx ptype cb: port=%d, queue=%d\n", + "Failed to add rx ptype cb: port=%d, queue=%d", port, queue); return -EINVAL; } diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c index 0dbfde64fe..18955971f6 100644 --- a/lib/node/ip4_lookup.c +++ b/lib/node/ip4_lookup.c @@ -143,7 +143,7 @@ rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop, ip, depth, val); if (ret < 0) { node_err("ip4_lookup", - "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d\n", + "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c index 6f56eb5ec5..309964f60f 100644 --- a/lib/node/ip6_lookup.c +++ b/lib/node/ip6_lookup.c @@ -283,7 +283,7 @@ rte_node_ip6_route_add(const uint8_t *ip, uint8_t depth, 
uint16_t next_hop, if (ret < 0) { node_err("ip6_lookup", "Unable to add entry %s / %d nh (%x) to LPM " - "table on sock %d, rc=%d\n", + "table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/kernel_rx.c b/lib/node/kernel_rx.c index 2dba7c8cc7..6c20cdbb1e 100644 --- a/lib/node/kernel_rx.c +++ b/lib/node/kernel_rx.c @@ -134,7 +134,7 @@ kernel_rx_node_do(struct rte_graph *graph, struct rte_node *node, kernel_rx_node if (len == 0 || len == 0xFFFF) { rte_pktmbuf_free(m); if (rx->idx <= 0) - node_dbg("kernel_rx", "rx_mbuf array is empty\n"); + node_dbg("kernel_rx", "rx_mbuf array is empty"); rx->idx--; break; } @@ -207,20 +207,20 @@ kernel_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); if (ctx->pktmbuf_pool == NULL) { - node_err("kernel_rx", "Invalid mbuf pool on graph %s\n", graph->name); + node_err("kernel_rx", "Invalid mbuf pool on graph %s", graph->name); return -EINVAL; } recv_info = rte_zmalloc_socket("kernel_rx_info", sizeof(kernel_rx_info_t), RTE_CACHE_LINE_SIZE, graph->socket); if (!recv_info) { - node_err("kernel_rx", "Kernel recv_info is NULL\n"); + node_err("kernel_rx", "Kernel recv_info is NULL"); return -ENOMEM; } sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (sock < 0) { - node_err("kernel_rx", "Unable to open RAW socket\n"); + node_err("kernel_rx", "Unable to open RAW socket"); return sock; } diff --git a/lib/node/kernel_tx.c b/lib/node/kernel_tx.c index 27d1808c71..3a96741622 100644 --- a/lib/node/kernel_tx.c +++ b/lib/node/kernel_tx.c @@ -36,7 +36,7 @@ kernel_tx_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t sin.sin_addr.s_addr = ip4->dst_addr; if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) - node_err("kernel_tx", "Unable to send packets: %s\n", strerror(errno)); + node_err("kernel_tx", "Unable to send packets: %s", strerror(errno)); } } @@ -87,7 +87,7 @@ kernel_tx_node_init(const struct rte_graph *graph __rte_unused, struct rte_node ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (ctx->sock < 0) - node_err("kernel_tx", "Unable to open RAW socket\n"); + node_err("kernel_tx", "Unable to open RAW socket"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index a9f3d6cc98..41a44be4b9 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -92,7 +92,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; @@ -144,7 +144,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 5979fb0efb..6b908e7ee0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -299,7 +299,7 @@ rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Copy the current value of token. 
@@ -350,7 +350,7 @@ rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id) { RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* The reader can go offline only after the load of the @@ -427,7 +427,7 @@ rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v, 1, rte_memory_order_release); __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING, - "Lock counter %u. Nested locks?\n", + "Lock counter %u. Nested locks?", v->qsbr_cnt[thread_id].lock_cnt); #endif } @@ -481,7 +481,7 @@ rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Acquire the changes to the shared data structure released diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c index 1fabec2bfe..1dab6d6645 100644 --- a/lib/stack/rte_stack.c +++ b/lib/stack/rte_stack.c @@ -56,7 +56,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, int ret; if (flags & ~(RTE_STACK_F_LF)) { - STACK_LOG_ERR("Unsupported stack flags %#x\n", flags); + STACK_LOG_ERR("Unsupported stack flags %#x", flags); return NULL; } @@ -65,7 +65,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, #endif #if !defined(RTE_STACK_LF_SUPPORTED) if (flags & RTE_STACK_F_LF) { - STACK_LOG_ERR("Lock-free stack is not supported on your platform\n"); + STACK_LOG_ERR("Lock-free stack is not supported on your platform"); rte_errno = ENOTSUP; return NULL; } @@ -82,7 +82,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - STACK_LOG_ERR("Cannot reserve memory for tailq\n"); + STACK_LOG_ERR("Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -92,7 +92,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id, 0, __alignof__(*s)); if (mz == NULL) { - STACK_LOG_ERR("Cannot reserve stack memzone!\n"); + STACK_LOG_ERR("Cannot reserve stack memzone!"); rte_mcfg_tailq_write_unlock(); rte_free(te); return NULL; diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 3e1ef1ac25..6e5443e5f8 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -245,7 +245,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -301,7 +301,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -321,7 +321,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->auth_key_len > VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH) { - VC_LOG_DBG("Invalid auth key length\n"); + VC_LOG_DBG("Invalid auth key length"); return -VIRTIO_CRYPTO_BADMSG; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
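The pattern behind most of the fixes above: the per-library log helpers already terminate the message, so a \n left in the format string produces the "two newline characters" mentioned in the commit log. A minimal sketch of the failure mode, using a hypothetical MYLIB_LOG wrapper rather than any of the real DPDK helpers:

    #include <stdio.h>

    /* Typical per-library wrapper: it appends the newline itself. */
    #define MYLIB_LOG(fmt, ...) \
        printf("MYLIB: " fmt "\n", ##__VA_ARGS__)

    int main(void)
    {
        MYLIB_LOG("Invalid dev_id = %d", -1);   /* one clean log line */
        MYLIB_LOG("Invalid dev_id = %d\n", -1); /* same message plus a stray blank line */
        return 0;
    }

With wrappers of this shape, dropping the trailing \n from the call sites, as done above, is the whole fix.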
* [PATCH v4 06/14] eal/linux: remove log paraphrasing the doc 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (4 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 05/14] lib: remove redundant newline from logs David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 07/14] bpf: remove log level in internal helper David Marchand ` (7 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff An error log message does not need to paraphrase the DPDK documentation. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/eal/linux/eal_timer.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index 3a30284e3a..df9ad61ae9 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -152,11 +152,7 @@ rte_eal_hpet_init(int make_default) } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n" - "Please enable CONFIG_HPET_MMAP in your kernel configuration " - "to allow HPET support.\n" - "To run without using HPET, unset RTE_LIBEAL_USE_HPET " - "in your build configuration or use '--no-hpet' EAL flag.\n"); + RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); close(fd); internal_conf->no_hpet = 1; return -1; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v4 07/14] bpf: remove log level in internal helper 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (5 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 06/14] eal/linux: remove log paraphrasing the doc David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 08/14] lib: simplify multilines log messages David Marchand ` (6 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Konstantin Ananyev There is no other log level than debug, simplify this helper. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/bpf/bpf_validate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 95b9ef99ef..f246b3c5eb 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -2178,18 +2178,18 @@ restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) } static void -log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, - uint32_t pc, int32_t loglvl) +log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, + uint32_t pc) { const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - rte_log(loglvl, rte_bpf_logtype, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - rte_log(loglvl, rte_bpf_logtype, + RTE_BPF_LOG(DEBUG, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2269,7 +2269,7 @@ evaluate(struct bpf_verifier *bvf) } } - log_eval_state(bvf, ins + idx, idx, RTE_LOG_DEBUG); + log_dbg_eval_state(bvf, ins + idx, idx); bvf->evin = NULL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v4 08/14] lib: simplify multilines log messages 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (6 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 07/14] bpf: remove log level in internal helper David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 09/14] rcu: introduce a logging helper David Marchand ` (5 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Andrew Rybchenko, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam Those error log messages don't need to span on multiple lines. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v2: - fixed format string crossing line boundaries, --- lib/acl/tb_mem.c | 4 ++-- lib/bpf/bpf_stub.c | 6 ++---- lib/eal/windows/eal_hugepages.c | 4 ++-- lib/ethdev/rte_ethdev.c | 12 ++++-------- 4 files changed, 10 insertions(+), 16 deletions(-) diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 6a9d96aaed..4ee65b23da 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,8 +26,8 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu)\n failed, currently allocated " - "by pool: %zu bytes\n", __func__, sz, pool->alloc); + RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; } diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index ebc5343896..83c2203622 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,8 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libelf installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", __func__); rte_errno = ENOTSUP; return NULL; @@ -36,8 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libpcap installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index b007dceb39..775c67e4c4 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -105,8 +105,8 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege\n" - "Verify that large-page support privilege is assigned to the current user\n"); + RTE_LOG(ERR, EAL, + "Cannot claim hugepage privilege, check large-page support privilege\n"); return -1; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b9d99ece15..9dd0efa9d8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6709,8 +6709,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly capability\n", + "port_id=%u is not configured, cannot get IP reassembly capability\n", port_id); 
return -EINVAL; } @@ -6745,8 +6744,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly configuration\n", + "port_id=%u is not configured, cannot get IP reassembly configuration\n", port_id); return -EINVAL; } @@ -6779,16 +6777,14 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration\n", + "port_id=%u is not configured, cannot set IP reassembly configuration\n", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u started,\n" - "cannot configure IP reassembly params.\n", + "port_id=%u is started, cannot configure IP reassembly params.\n", port_id); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v4 09/14] rcu: introduce a logging helper 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (7 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 08/14] lib: simplify multilines log messages David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:37 ` [PATCH v4 10/14] vhost: improve log for memory dumping configuration David Marchand ` (4 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Honnappa Nagarahalli, Tyler Retzlaff Add a simple helper for logging messages in this library. Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/rcu/rte_rcu_qsbr.c | 62 ++++++++++++++++-------------------------- lib/rcu/rte_rcu_qsbr.h | 1 + 2 files changed, 24 insertions(+), 39 deletions(-) diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 41a44be4b9..5b6530788a 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -19,6 +19,9 @@ #include "rte_rcu_qsbr.h" #include "rcu_qsbr_pvt.h" +#define RCU_LOG(level, fmt, args...) \ + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + /* Get the memory size of QSBR variable */ size_t rte_rcu_qsbr_get_memsize(uint32_t max_threads) @@ -26,9 +29,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads) size_t sz; if (max_threads == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid max_threads %u\n", - __func__, max_threads); + RCU_LOG(ERR, "Invalid max_threads %u", max_threads); rte_errno = EINVAL; return 1; @@ -52,8 +53,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) size_t sz; if (v == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -85,8 +85,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -137,8 +136,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -211,8 +209,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v) uint32_t i, t, id; if (v == NULL || f == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -282,8 +279,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) params->v == NULL || params->name == NULL || params->size == 0 || params->esize == 0 || (params->esize % 4 != 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return NULL; @@ -293,9 +289,10 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) */ if ((params->trigger_reclaim_limit <= params->size) && 
(params->max_reclaim_size == 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter, size = %u, trigger_reclaim_limit = %u, max_reclaim_size = %u\n", - __func__, params->size, params->trigger_reclaim_limit, + RCU_LOG(ERR, + "Invalid input parameter, size = %u, trigger_reclaim_limit = %u, " + "max_reclaim_size = %u", + params->size, params->trigger_reclaim_limit, params->max_reclaim_size); rte_errno = EINVAL; @@ -328,8 +325,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) __RTE_QSBR_TOKEN_SIZE + params->esize, qs_fifo_size, SOCKET_ID_ANY, flags); if (dq->r == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): defer queue create failed\n", __func__); + RCU_LOG(ERR, "defer queue create failed"); rte_free(dq); return NULL; } @@ -354,8 +350,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) uint32_t cur_size; if (dq == NULL || e == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -372,8 +367,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) */ cur_size = rte_ring_count(dq->r); if (cur_size > dq->trigger_reclaim_limit) { - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Triggering reclamation\n", __func__); + RCU_LOG(INFO, "Triggering reclamation"); rte_rcu_qsbr_dq_reclaim(dq, dq->max_reclaim_size, NULL, NULL, NULL); } @@ -391,23 +385,18 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) * Enqueue uses the configured flags when the DQ was created. */ if (rte_ring_enqueue_elem(dq->r, data, dq->esize) != 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Enqueue failed\n", __func__); + RCU_LOG(ERR, "Enqueue failed"); /* Note that the token generated above is not used. * Other than wasting tokens, it should not cause any * other issues. 
*/ - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Skipped enqueuing token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Skipped enqueuing token = %" PRIu64, dq_elem->token); rte_errno = ENOSPC; return 1; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Enqueued token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Enqueued token = %" PRIu64, dq_elem->token); return 0; } @@ -422,8 +411,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, __rte_rcu_qsbr_dq_elem_t *dq_elem; if (dq == NULL || n == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -445,17 +433,14 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, } rte_ring_dequeue_elem_finish(dq->r, 1); - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Reclaimed token = %" PRIu64, dq_elem->token); dq->free_fn(dq->p, dq_elem->elem, 1); cnt++; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed %u resources\n", __func__, cnt); + RCU_LOG(INFO, "Reclaimed %u resources", cnt); if (freed != NULL) *freed = cnt; @@ -472,8 +457,7 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) unsigned int pending; if (dq == NULL) { - rte_log(RTE_LOG_DEBUG, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(DEBUG, "Invalid input parameter"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 6b908e7ee0..0dca8310c0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -36,6 +36,7 @@ extern "C" { #include <rte_ring.h> extern int rte_rcu_log_type; +#define RTE_LOGTYPE_RCU rte_rcu_log_type #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
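For reference, the RCU_LOG helper above relies on RTE_LOG() deriving the logtype and message prefix from its second argument, which is why the header also gains the RTE_LOGTYPE_RCU alias. A call such as the one in rte_rcu_qsbr_get_memsize() then expands roughly (a sketch, not the preprocessor's literal output) to:

    RCU_LOG(ERR, "Invalid max_threads %u", max_threads);

    /* roughly what RTE_LOG() turns this into: */
    rte_log(RTE_LOG_ERR, rte_rcu_log_type,
        "RCU: %s(): Invalid max_threads %u\n", __func__, max_threads);

so converted call sites keep the function-name prefix and the trailing newline without repeating them by hand.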
* [PATCH v4 10/14] vhost: improve log for memory dumping configuration 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (8 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 09/14] rcu: introduce a logging helper David Marchand @ 2023-12-18 14:37 ` David Marchand 2023-12-18 14:38 ` [PATCH v4 11/14] log: add a per line log helper David Marchand ` (3 subsequent siblings) 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:37 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Maxime Coquelin, Chenbo Xia Add the device name as a prefix of logs associated to madvise() calls. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- lib/vhost/iotlb.c | 18 +++++++++--------- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 26 +++++++++++++------------- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 87ac0e5126..10ab77262e 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -54,16 +54,16 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct vhost_iotlb_entr } static void -vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node) +vhost_user_iotlb_set_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node) { uint64_t start; start = node->uaddr + node->uoffset; - mem_set_dump((void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); + mem_set_dump(dev, (void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); } static void -vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, +vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node, struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next) { uint64_t start, end; @@ -80,7 +80,7 @@ vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift)); if (end > start) - mem_set_dump((void *)(uintptr_t)start, end - start, false, + mem_set_dump(dev, (void *)(uintptr_t)start, end - start, false, RTE_BIT64(node->page_shift)); } @@ -204,7 +204,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net *dev) vhost_user_iotlb_wr_lock_all(dev); RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) { - vhost_user_iotlb_clear_dump(node, NULL, NULL); + vhost_user_iotlb_clear_dump(dev, node, NULL, NULL); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -230,7 +230,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net *dev) if (!entry_idx) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -285,7 +285,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua vhost_user_iotlb_pool_put(dev, new_node); goto unlock; } else if (node->iova > new_node->iova) { - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_BEFORE(node, new_node, next); dev->iotlb_cache_nr++; @@ -293,7 +293,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua } } - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_TAIL(&dev->iotlb_list, new_node, next); dev->iotlb_cache_nr++; @@ -322,7 +322,7 @@ 
vhost_user_iotlb_cache_remove(struct virtio_net *dev, uint64_t iova, uint64_t si if (iova < node->iova + node->size) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index f8624fba3d..5f24911190 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -1062,6 +1062,6 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } -void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment); +void mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t alignment); #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index e36312181a..413f068bcd 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -763,7 +763,7 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr) } void -mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) +mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t pagesz) { #ifdef MADV_DONTDUMP void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz); @@ -771,8 +771,8 @@ mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - rte_log(RTE_LOG_INFO, vhost_config_log_level, - "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno)); + VHOST_LOG_CONFIG(dev->ifname, INFO, + "could not set coredump preference (%s).\n", strerror(errno)); } #endif } @@ -807,7 +807,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc_packed, len, true, + mem_set_dump(dev, vq->desc_packed, len, true, hua_to_alignment(dev->mem, vq->desc_packed)); numa_realloc(&dev, &vq); *pdev = dev; @@ -824,7 +824,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->driver_event, len, true, + mem_set_dump(dev, vq->driver_event, len, true, hua_to_alignment(dev->mem, vq->driver_event)); len = sizeof(struct vring_packed_desc_event); vq->device_event = (struct vring_packed_desc_event *) @@ -837,7 +837,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->device_event, len, true, + mem_set_dump(dev, vq->device_event, len, true, hua_to_alignment(dev->mem, vq->device_event)); vq->access_ok = true; return; @@ -855,7 +855,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); + mem_set_dump(dev, vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); numa_realloc(&dev, &vq); *pdev = dev; *pvq = vq; @@ -871,7 +871,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); + mem_set_dump(dev, vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); len = sizeof(struct vring_used) + sizeof(struct vring_used_elem) * vq->size; if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) @@ -884,7 +884,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); + 
mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { VHOST_LOG_CONFIG(dev->ifname, WARNING, @@ -1274,7 +1274,7 @@ vhost_user_mmap_region(struct virtio_net *dev, region->mmap_addr = mmap_addr; region->mmap_size = mmap_size; region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset; - mem_set_dump(mmap_addr, mmap_size, false, alignment); + mem_set_dump(dev, mmap_addr, mmap_size, false, alignment); if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { @@ -1580,7 +1580,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } alignment = get_blk_size(mfd); - mem_set_dump(ptr, size, false, alignment); + mem_set_dump(dev, ptr, size, false, alignment); *fd = mfd; return ptr; } @@ -1789,7 +1789,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info->fd = -1; } - mem_set_dump(addr, mmap_size, false, get_blk_size(fd)); + mem_set_dump(dev, addr, mmap_size, false, get_blk_size(fd)); dev->inflight_info->fd = fd; dev->inflight_info->addr = addr; dev->inflight_info->size = mmap_size; @@ -2343,7 +2343,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, dev->log_addr = (uint64_t)(uintptr_t)addr; dev->log_base = dev->log_addr + off; dev->log_size = size; - mem_set_dump(addr, size + off, false, alignment); + mem_set_dump(dev, addr, size + off, false, alignment); for (i = 0; i < dev->nr_vring; i++) { struct vhost_virtqueue *vq = dev->virtqueue[i]; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v4 11/14] log: add a per line log helper 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (9 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 10/14] vhost: improve log for memory dumping configuration David Marchand @ 2023-12-18 14:38 ` David Marchand 2023-12-19 15:45 ` Thomas Monjalon 2023-12-18 14:38 ` [PATCH v4 12/14] lib: convert to per line logging David Marchand ` (2 subsequent siblings) 13 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-18 14:38 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng gcc builtin __builtin_strchr can be used as a static assertion to check whether passed format strings contain a \n. This can be useful to detect double \n in log messages. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Chengwen Feng <fengchengwen@huawei.com> --- Changes since v3: - fixed some checkpatch complaints, Changes since RFC v1: - added a check in checkpatches.sh, --- devtools/checkpatches.sh | 8 ++++++++ lib/log/rte_log.h | 21 +++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh index 10b79ca2bc..10d1bf490b 100755 --- a/devtools/checkpatches.sh +++ b/devtools/checkpatches.sh @@ -53,6 +53,14 @@ print_usage () { check_forbidden_additions() { # <patch> res=0 + # refrain from new calls to RTE_LOG + awk -v FOLDERS="lib" \ + -v EXPRESSIONS="RTE_LOG\\\(" \ + -v RET_ON_FAIL=1 \ + -v MESSAGE='Prefer RTE_LOG_LINE' \ + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \ + "$1" || res=1 + # refrain from new additions of rte_panic() and rte_exit() # multiple folders and expressions are separated by spaces awk -v FOLDERS="lib drivers" \ diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h index 3394746103..637e9dcc9a 100644 --- a/lib/log/rte_log.h +++ b/lib/log/rte_log.h @@ -17,6 +17,7 @@ extern "C" { #endif +#include <assert.h> #include <stdint.h> #include <stdio.h> #include <stdarg.h> @@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap) RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \ 0) +#ifdef RTE_TOOLCHAIN_GCC +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ + static_assert(!__builtin_strchr(fmt, '\n'), \ + "This log format string contains a \\n") +#else +#define RTE_LOG_CHECK_NO_NEWLINE(...) +#endif + +#define RTE_LOG_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + +#define RTE_LOG_DP_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + #define RTE_LOG_REGISTER_IMPL(type, name, level) \ int type; \ RTE_INIT(__##type) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
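What the assertion in this patch catches in practice, sketched against the macros introduced above (the RCU logtype is only used here as a convenient, already-registered example):

    /* Builds fine: RTE_LOG_LINE appends the newline itself. */
    RTE_LOG_LINE(ERR, RCU, "Invalid max_threads %u", max_threads);

    /* Fails at build time with gcc:
     *   error: static assertion failed: "This log format string contains a \n"
     */
    RTE_LOG_LINE(ERR, RCU, "Invalid max_threads %u\n", max_threads);

Since the check is a static_assert over __builtin_strchr() of the (necessarily literal) format string, it is purely a build-time check and adds no runtime cost.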
* Re: [PATCH v4 11/14] log: add a per line log helper 2023-12-18 14:38 ` [PATCH v4 11/14] log: add a per line log helper David Marchand @ 2023-12-19 15:45 ` Thomas Monjalon 2023-12-19 17:16 ` Stephen Hemminger 0 siblings, 1 reply; 122+ messages in thread From: Thomas Monjalon @ 2023-12-19 15:45 UTC (permalink / raw) To: David Marchand Cc: dev, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng 18/12/2023 15:38, David Marchand: > +#ifdef RTE_TOOLCHAIN_GCC > +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ > + static_assert(!__builtin_strchr(fmt, '\n'), \ > + "This log format string contains a \\n") > +#else > +#define RTE_LOG_CHECK_NO_NEWLINE(...) > +#endif No support in clang? > +#define RTE_LOG_LINE(l, t, ...) do { \ > + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ > + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ > + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ > +} while (0) > + > +#define RTE_LOG_DP_LINE(l, t, ...) do { \ > + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ > + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ > + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ > +} while (0) I don't think we need a space between __VA_ARGS__ and the comma. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v4 11/14] log: add a per line log helper 2023-12-19 15:45 ` Thomas Monjalon @ 2023-12-19 17:16 ` Stephen Hemminger 2023-12-20 8:26 ` David Marchand 0 siblings, 1 reply; 122+ messages in thread From: Stephen Hemminger @ 2023-12-19 17:16 UTC (permalink / raw) To: Thomas Monjalon Cc: David Marchand, dev, ferruh.yigit, bruce.richardson, mb, Chengwen Feng On Tue, 19 Dec 2023 16:45:19 +0100 Thomas Monjalon <thomas@monjalon.net> wrote: > 18/12/2023 15:38, David Marchand: > > +#ifdef RTE_TOOLCHAIN_GCC > > +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ > > + static_assert(!__builtin_strchr(fmt, '\n'), \ > > + "This log format string contains a \\n") > > +#else > > +#define RTE_LOG_CHECK_NO_NEWLINE(...) > > +#endif > > No support in clang? clang has static assert, but probably not builtin_strchr ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v4 11/14] log: add a per line log helper 2023-12-19 17:16 ` Stephen Hemminger @ 2023-12-20 8:26 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 8:26 UTC (permalink / raw) To: Stephen Hemminger, Thomas Monjalon Cc: dev, ferruh.yigit, bruce.richardson, mb, Chengwen Feng On Tue, Dec 19, 2023 at 6:16 PM Stephen Hemminger <stephen@networkplumber.org> wrote: > > On Tue, 19 Dec 2023 16:45:19 +0100 > Thomas Monjalon <thomas@monjalon.net> wrote: > > > 18/12/2023 15:38, David Marchand: > > > +#ifdef RTE_TOOLCHAIN_GCC > > > +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ > > > + static_assert(!__builtin_strchr(fmt, '\n'), \ > > > + "This log format string contains a \\n") > > > +#else > > > +#define RTE_LOG_CHECK_NO_NEWLINE(...) > > > +#endif > > > > No support in clang? > > clang has static assert, but probably not builtin_strchr clang seems to have support for __builtin_strchr (which was not obvious to me when I first looked at it). Testing with clang ("thanks" to net/mlx4), I realised that this check relies on some gnu extension (constant folding) which breaks compilation with -pedantic. An additional check on PEDANTIC is needed, and I can then add support for clang. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
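Following up on that last point, the gcc-only guard from the patch would presumably grow into something keyed on both compilers plus a PEDANTIC escape hatch. One possible shape, offered as an assumption drawn from the discussion rather than the code that was eventually merged:

    #if (defined(RTE_TOOLCHAIN_GCC) || defined(RTE_TOOLCHAIN_CLANG)) && \
        !defined(PEDANTIC)
    #define RTE_LOG_CHECK_NO_NEWLINE(fmt) \
        static_assert(!__builtin_strchr(fmt, '\n'), \
            "This log format string contains a \\n")
    #else
    #define RTE_LOG_CHECK_NO_NEWLINE(...)
    #endif

i.e. keep the assertion for gcc and clang, and fall back to a no-op wherever the constant folding of __builtin_strchr cannot be relied on, such as the -pedantic builds mentioned above.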
* [PATCH v4 12/14] lib: convert to per line logging 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (10 preceding siblings ...) 2023-12-18 14:38 ` [PATCH v4 11/14] log: add a per line log helper David Marchand @ 2023-12-18 14:38 ` David Marchand 2023-12-20 13:46 ` Thomas Monjalon 2023-12-18 14:38 ` [PATCH v4 13/14] lib: replace logging helpers David Marchand 2023-12-18 14:38 ` [PATCH v4 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-18 14:38 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. Note: - for acl and sched libraries that still has some debug multilines messages, a direct call to RTE_LOG is used: this will make it easier to notice such special cases, Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since v3: - fixed some checkpatch complaints, --- lib/acl/acl_bld.c | 28 +-- lib/acl/acl_gen.c | 8 +- lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 2 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 4 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 6 +- lib/eal/common/eal_common_dev.c | 80 +++---- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +-- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 38 ++-- lib/eal/common/eal_common_lcore.c | 26 +-- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +++--- lib/eal/common/eal_common_memzone.c | 24 +-- lib/eal/common/eal_common_options.c | 236 ++++++++++---------- lib/eal/common/eal_common_proc.c | 112 +++++----- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 2 +- lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 ++--- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 ++-- lib/eal/common/malloc_mp.c | 72 +++---- lib/eal/common/rte_keepalive.c | 2 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 75 +++---- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 8 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +++--- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 10 +- lib/eal/freebsd/eal_memory.c | 34 +-- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +++++------ lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 ++-- lib/eal/linux/eal_hugepage_info.c | 38 ++-- lib/eal/linux/eal_interrupts.c | 116 +++++----- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +++++------ lib/eal/linux/eal_memory.c | 208 +++++++++--------- lib/eal/linux/eal_thread.c | 4 +- 
lib/eal/linux/eal_timer.c | 10 +- lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ lib/eal/linux/eal_vfio_mp_sync.c | 4 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 2 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 34 +-- lib/eal/windows/eal.c | 36 ++-- lib/eal/windows/eal_alarm.c | 12 +- lib/eal/windows/eal_debug.c | 8 +- lib/eal/windows/eal_dev.c | 8 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 7 +- lib/eal/windows/eal_memalloc.c | 50 ++--- lib/eal/windows/eal_memory.c | 20 +- lib/eal/windows/eal_windows.h | 4 +- lib/eal/windows/include/rte_windows.h | 6 +- lib/eal/windows/rte_thread.c | 28 +-- lib/efd/rte_efd.c | 58 ++--- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/hash/rte_cuckoo_hash.c | 52 ++--- lib/hash/rte_fbk_hash.c | 4 +- lib/hash/rte_hash_crc.c | 12 +- lib/hash/rte_thash.c | 20 +- lib/hash/rte_thash_gfni.c | 8 +- lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 41 ++-- lib/log/log.c | 6 +- lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/mempool/rte_mempool.c | 24 +-- lib/mempool/rte_mempool.h | 2 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/pipeline/rte_pipeline.c | 228 ++++++++++---------- lib/port/rte_port_ethdev.c | 18 +- lib/port/rte_port_eventdev.c | 18 +- lib/port/rte_port_fd.c | 24 +-- lib/port/rte_port_frag.c | 14 +- lib/port/rte_port_ras.c | 12 +- lib/port/rte_port_ring.c | 18 +- lib/port/rte_port_sched.c | 12 +- lib/port/rte_port_source_sink.c | 48 ++--- lib/port/rte_port_sym_crypto.c | 18 +- lib/power/guest_channel.c | 38 ++-- lib/power/power_acpi_cpufreq.c | 106 ++++----- lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ lib/power/power_common.c | 10 +- lib/power/power_cppc_cpufreq.c | 118 +++++----- lib/power/power_intel_uncore.c | 68 +++--- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 144 ++++++------- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +-- lib/power/rte_power_uncore.c | 14 +- lib/rcu/rte_rcu_qsbr.c | 2 +- lib/reorder/rte_reorder.c | 32 +-- lib/rib/rte_rib.c | 10 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 24 +-- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 ++++++++++++------------ lib/table/rte_table_acl.c | 72 +++---- lib/table/rte_table_array.c | 16 +- lib/table/rte_table_hash_cuckoo.c | 22 +- lib/table/rte_table_hash_ext.c | 22 +- lib/table/rte_table_hash_key16.c | 38 ++-- lib/table/rte_table_hash_key32.c | 38 ++-- lib/table/rte_table_hash_key8.c | 38 ++-- lib/table/rte_table_hash_lru.c | 22 +- lib/table/rte_table_lpm.c | 42 ++-- lib/table/rte_table_lpm_ipv6.c | 44 ++-- lib/table/rte_table_stub.c | 4 +- lib/vhost/fd_man.c | 8 +- 129 files changed, 2280 insertions(+), 2279 deletions(-) diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index eaf8770415..27bdd6b9a1 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -1017,8 +1017,8 @@ build_trie(struct acl_build_context *context, struct rte_acl_build_rule *head, break; default: - RTE_LOG(ERR, ACL, - "Error in rule[%u] type - %hhu\n", + RTE_LOG_LINE(ERR, ACL, + "Error in rule[%u] type - %hhu", rule->f->data.userdata, rule->config->defs[n].type); return NULL; @@ -1374,7 +1374,7 @@ acl_build_tries(struct acl_build_context *context, last = build_one_trie(context, rule_sets, n, context->node_max); if (context->bld_tries[n].trie == NULL) { 
- RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1383,8 +1383,8 @@ acl_build_tries(struct acl_build_context *context, break; if (num_tries == RTE_DIM(context->tries)) { - RTE_LOG(ERR, ACL, - "Exceeded max number of tries: %u\n", + RTE_LOG_LINE(ERR, ACL, + "Exceeded max number of tries: %u", num_tries); return -ENOMEM; } @@ -1409,7 +1409,7 @@ acl_build_tries(struct acl_build_context *context, */ last = build_one_trie(context, rule_sets, n, INT32_MAX); if (context->bld_tries[n].trie == NULL || last != NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1435,8 +1435,8 @@ acl_build_log(const struct acl_build_context *ctx) for (n = 0; n < RTE_DIM(ctx->tries); n++) { if (ctx->tries[n].count != 0) - RTE_LOG(DEBUG, ACL, - "trie %u: number of rules: %u, indexes: %u\n", + RTE_LOG_LINE(DEBUG, ACL, + "trie %u: number of rules: %u, indexes: %u", n, ctx->tries[n].count, ctx->tries[n].num_data_indexes); } @@ -1526,8 +1526,8 @@ acl_bld(struct acl_build_context *bcx, struct rte_acl_ctx *ctx, /* build phase runs out of memory. */ if (rc != 0) { - RTE_LOG(ERR, ACL, - "ACL context: %s, %s() failed with error code: %d\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, %s() failed with error code: %d", bcx->acx->name, __func__, rc); return rc; } @@ -1568,8 +1568,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) for (i = 0; i != cfg->num_fields; i++) { if (cfg->defs[i].type > RTE_ACL_FIELD_TYPE_BITMASK) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid type: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid type: %hhu for %u-th field", ctx->name, cfg->defs[i].type, i); return -EINVAL; } @@ -1580,8 +1580,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) ; if (j == RTE_DIM(field_sizes)) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid size: %hhu for %u-th field\n", + RTE_LOG_LINE(ERR, ACL, + "ACL context: %s, invalid size: %hhu for %u-th field", ctx->name, cfg->defs[i].size, i); return -EINVAL; } diff --git a/lib/acl/acl_gen.c b/lib/acl/acl_gen.c index 03a47ea231..2f612df1e0 100644 --- a/lib/acl/acl_gen.c +++ b/lib/acl/acl_gen.c @@ -471,9 +471,9 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, XMM_SIZE; if (total_size > max_size) { - RTE_LOG(DEBUG, ACL, + RTE_LOG_LINE(DEBUG, ACL, "Gen phase for ACL ctx \"%s\" exceeds max_size limit, " - "bytes required: %zu, allowed: %zu\n", + "bytes required: %zu, allowed: %zu", ctx->name, total_size, max_size); return -ERANGE; } @@ -481,8 +481,8 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE, ctx->socket_id); if (mem == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes on socket %d for %s failed", total_size, ctx->socket_id, ctx->name); return -ENOMEM; } diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c index 760c3587d4..bec26d0a22 100644 --- a/lib/acl/rte_acl.c +++ b/lib/acl/rte_acl.c @@ -399,15 +399,15 @@ rte_acl_create(const struct rte_acl_param *param) te = rte_zmalloc("ACL_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, ACL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, ACL, "Cannot allocate tailq entry!"); goto exit; } ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, 
param->socket_id); if (ctx == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + RTE_LOG_LINE(ERR, ACL, + "allocation of %zu bytes on socket %d for %s failed", sz, param->socket_id, name); rte_free(te); goto exit; @@ -473,7 +473,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules, ((uintptr_t)rules + i * ctx->rule_sz); rc = acl_check_rule(&rv->data); if (rc != 0) { - RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n", + RTE_LOG_LINE(ERR, ACL, "%s(%s): rule #%u is invalid", __func__, ctx->name, i + 1); return rc; } diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 4ee65b23da..7914899363 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,7 +26,7 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + RTE_LOG_LINE(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes", __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c index acac14131a..456f27112c 100644 --- a/lib/eal/common/eal_common_bus.c +++ b/lib/eal/common/eal_common_bus.c @@ -35,14 +35,14 @@ rte_bus_register(struct rte_bus *bus) RTE_VERIFY(!bus->plug || bus->unplug); TAILQ_INSERT_TAIL(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] bus.", rte_bus_name(bus)); } void rte_bus_unregister(struct rte_bus *bus) { TAILQ_REMOVE(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] bus.\n", rte_bus_name(bus)); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] bus.", rte_bus_name(bus)); } /* Scan all the buses for registered devices */ @@ -55,7 +55,7 @@ rte_bus_scan(void) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus->scan(); if (ret) - RTE_LOG(ERR, EAL, "Scan for (%s) bus failed.\n", + RTE_LOG_LINE(ERR, EAL, "Scan for (%s) bus failed.", rte_bus_name(bus)); } @@ -77,14 +77,14 @@ rte_bus_probe(void) ret = bus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(bus)); } if (vbus) { ret = vbus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.", rte_bus_name(vbus)); } @@ -133,7 +133,7 @@ rte_bus_dump(FILE *f) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus_dump_one(f, bus); if (ret) { - RTE_LOG(ERR, EAL, "Unable to write to stream (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Unable to write to stream (%d)", ret); break; } @@ -235,15 +235,15 @@ rte_bus_get_iommu_class(void) continue; bus_iova_mode = bus->get_iommu_class(); - RTE_LOG(DEBUG, EAL, "Bus %s wants IOVA as '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Bus %s wants IOVA as '%s'", rte_bus_name(bus), bus_iova_mode == RTE_IOVA_DC ? "DC" : (bus_iova_mode == RTE_IOVA_PA ? 
"PA" : "VA")); if (bus_iova_mode == RTE_IOVA_PA) { buses_want_pa = true; if (!RTE_IOVA_IN_MBUF) - RTE_LOG(WARNING, EAL, - "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.\n", + RTE_LOG_LINE(WARNING, EAL, + "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.", rte_bus_name(bus)); } else if (bus_iova_mode == RTE_IOVA_VA) buses_want_va = true; @@ -255,8 +255,8 @@ rte_bus_get_iommu_class(void) } else { mode = RTE_IOVA_DC; if (buses_want_va) { - RTE_LOG(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.\n"); - RTE_LOG(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.\n"); + RTE_LOG_LINE(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'."); + RTE_LOG_LINE(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize."); } } diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c index 0187076af1..02a983b286 100644 --- a/lib/eal/common/eal_common_class.c +++ b/lib/eal/common/eal_common_class.c @@ -19,14 +19,14 @@ rte_class_register(struct rte_class *class) RTE_VERIFY(class->name && strlen(class->name)); TAILQ_INSERT_TAIL(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] device class.", class->name); } void rte_class_unregister(struct rte_class *class) { TAILQ_REMOVE(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] device class.\n", class->name); + RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] device class.", class->name); } struct rte_class * diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c index 0daf0f3188..4b6530f2fb 100644 --- a/lib/eal/common/eal_common_config.c +++ b/lib/eal/common/eal_common_config.c @@ -31,7 +31,7 @@ int eal_set_runtime_dir(const char *run_dir) { if (strlcpy(runtime_dir, run_dir, PATH_MAX) >= PATH_MAX) { - RTE_LOG(ERR, EAL, "Runtime directory string too long\n"); + RTE_LOG_LINE(ERR, EAL, "Runtime directory string too long"); return -1; } diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c index 9cac9c6390..065843f34e 100644 --- a/lib/eal/common/eal_common_debug.c +++ b/lib/eal/common/eal_common_debug.c @@ -16,7 +16,7 @@ __rte_panic(const char *funcname, const char *format, ...) { va_list ap; - rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, "PANIC in %s():\n", funcname); + RTE_LOG_LINE(CRIT, EAL, "PANIC in %s():", funcname); va_start(ap, format); rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap); va_end(ap); @@ -42,7 +42,7 @@ rte_exit(int exit_code, const char *format, ...) 
va_end(ap); if (rte_eal_cleanup() != 0 && rte_errno != EALREADY) - RTE_LOG(CRIT, EAL, - "EAL could not release all resources\n"); + RTE_LOG_LINE(CRIT, EAL, + "EAL could not release all resources"); exit(exit_code); } diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c index 614ef6c9fc..359907798a 100644 --- a/lib/eal/common/eal_common_dev.c +++ b/lib/eal/common/eal_common_dev.c @@ -182,7 +182,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) goto err_devarg; if (da->bus->plug == NULL) { - RTE_LOG(ERR, EAL, "Function plug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function plug not supported by bus (%s)", da->bus->name); ret = -ENOTSUP; goto err_devarg; @@ -199,7 +199,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) dev = da->bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s)", da->name); ret = -ENODEV; goto err_devarg; @@ -214,7 +214,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) ret = -ENOTSUP; if (ret && !rte_dev_is_probed(dev)) { /* if hasn't ever succeeded */ - RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot attach the device (%s)", dev->name); return ret; } @@ -248,13 +248,13 @@ rte_dev_probe(const char *devargs) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug add device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug add device"); return req.result; } @@ -264,8 +264,8 @@ rte_dev_probe(const char *devargs) ret = local_dev_probe(devargs, &dev); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on primary process"); /** * it is possible that secondary process failed to attached a @@ -282,8 +282,8 @@ rte_dev_probe(const char *devargs) /* if any communication error, we need to rollback. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug add request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug add request to secondary"); ret = -ENOMSG; goto rollback; } @@ -293,8 +293,8 @@ rte_dev_probe(const char *devargs) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to attach device on secondary process"); ret = req.result; /* for -EEXIST, we don't need to rollback. */ @@ -310,15 +310,15 @@ rte_dev_probe(const char *devargs) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on secondary." - "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); /* primary rollback itself. */ if (local_dev_remove(dev) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device attach on primary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -331,13 +331,13 @@ rte_eal_hotplug_remove(const char *busname, const char *devname) bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); return -ENOENT; } dev = bus->find_device(NULL, cmp_dev_name, devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", devname); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", devname); return -EINVAL; } @@ -351,14 +351,14 @@ local_dev_remove(struct rte_device *dev) int ret; if (dev->bus->unplug == NULL) { - RTE_LOG(ERR, EAL, "Function unplug not supported by bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Function unplug not supported by bus (%s)", dev->bus->name); return -ENOTSUP; } ret = dev->bus->unplug(dev); if (ret) { - RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Driver cannot detach the device (%s)", dev->name); return (ret < 0) ? ret : -ENOENT; } @@ -374,7 +374,7 @@ rte_dev_remove(struct rte_device *dev) int ret; if (!rte_dev_is_probed(dev)) { - RTE_LOG(ERR, EAL, "Device is not probed\n"); + RTE_LOG_LINE(ERR, EAL, "Device is not probed"); return -ENOENT; } @@ -394,13 +394,13 @@ rte_dev_remove(struct rte_device *dev) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug remove device\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to hotplug remove device"); return req.result; } @@ -414,8 +414,8 @@ rte_dev_remove(struct rte_device *dev) * part of the secondary processes still detached it successfully. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send device detach request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to send device detach request to secondary"); ret = -ENOMSG; goto rollback; } @@ -425,8 +425,8 @@ rte_dev_remove(struct rte_device *dev) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on secondary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on secondary process"); ret = req.result; /** * if -ENOENT, we don't need to rollback, since devices is @@ -441,8 +441,8 @@ rte_dev_remove(struct rte_device *dev) /* if primary failed, still need to consider if rollback is necessary */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on primary process\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to detach device on primary process"); /* if -ENOENT, we don't need to rollback */ if (ret == -ENOENT) return ret; @@ -456,9 +456,9 @@ rte_dev_remove(struct rte_device *dev) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Failed to rollback device detach on secondary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -508,16 +508,16 @@ rte_dev_event_callback_register(const char *device_name, } TAILQ_INSERT_TAIL(&dev_event_cbs, event_cb, next); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for device " "event callback."); ret = -ENOMEM; goto error; } } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "The callback is already exist, no need " - "to register again.\n"); + "to register again."); event_cb = NULL; ret = -EEXIST; goto error; @@ -635,17 +635,17 @@ rte_dev_iterator_init(struct rte_dev_iterator *it, * one layer specified. */ if (bus == NULL && cls == NULL) { - RTE_LOG(DEBUG, EAL, "Either bus or class must be specified.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Either bus or class must be specified."); rte_errno = EINVAL; goto get_out; } if (bus != NULL && bus->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Bus %s not supported\n", bus->name); + RTE_LOG_LINE(DEBUG, EAL, "Bus %s not supported", bus->name); rte_errno = ENOTSUP; goto get_out; } if (cls != NULL && cls->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Class %s not supported\n", cls->name); + RTE_LOG_LINE(DEBUG, EAL, "Class %s not supported", cls->name); rte_errno = ENOTSUP; goto get_out; } diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c index fb5d0a293b..dbf5affa76 100644 --- a/lib/eal/common/eal_common_devargs.c +++ b/lib/eal/common/eal_common_devargs.c @@ -39,12 +39,12 @@ devargs_bus_parse_default(struct rte_devargs *devargs, /* Parse devargs name from bus key-value list. */ name = rte_kvargs_get(bus_args, "name"); if (name == NULL) { - RTE_LOG(DEBUG, EAL, "devargs name not found: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "devargs name not found: %s", devargs->data); return 0; } if (rte_strscpy(devargs->name, name, sizeof(devargs->name)) < 0) { - RTE_LOG(ERR, EAL, "devargs name too long: %s\n", + RTE_LOG_LINE(ERR, EAL, "devargs name too long: %s", devargs->data); return -E2BIG; } @@ -79,7 +79,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, if (devargs->data != devstr) { devargs->data = strdup(devstr); if (devargs->data == NULL) { - RTE_LOG(ERR, EAL, "OOM\n"); + RTE_LOG_LINE(ERR, EAL, "OOM"); ret = -ENOMEM; goto get_out; } @@ -133,7 +133,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->bus_str = layers[i].str; devargs->bus = rte_bus_find_by_name(kv->value); if (devargs->bus == NULL) { - RTE_LOG(ERR, EAL, "Could not find bus \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find bus \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -142,7 +142,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->cls_str = layers[i].str; devargs->cls = rte_class_find_by_name(kv->value); if (devargs->cls == NULL) { - RTE_LOG(ERR, EAL, "Could not find class \"%s\"\n", + RTE_LOG_LINE(ERR, EAL, "Could not find class \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -217,7 +217,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) da->name[i] = devname[i]; i++; if (i == maxlen) { - RTE_LOG(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu\n", + RTE_LOG_LINE(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu", dev, maxlen); da->name[i - 1] = '\0'; return -EINVAL; @@ -227,7 +227,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) if (bus == NULL) { bus = rte_bus_find_by_device_name(da->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "failed to parse device \"%s\"\n", + 
RTE_LOG_LINE(ERR, EAL, "failed to parse device \"%s\"", da->name); return -EFAULT; } @@ -239,7 +239,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) else da->data = strdup(""); if (da->data == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse arguments\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse arguments"); return -ENOMEM; } da->drv_str = da->data; @@ -266,7 +266,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...) len += 1; dev = calloc(1, (size_t)len); if (dev == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse device\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory to parse device"); return -ENOMEM; } diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c index 95da55d9b0..721cb63bf2 100644 --- a/lib/eal/common/eal_common_dynmem.c +++ b/lib/eal/common/eal_common_dynmem.c @@ -76,7 +76,7 @@ eal_dynmem_memseg_lists_init(void) n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count(); memtypes = calloc(n_memtypes, sizeof(*memtypes)); if (memtypes == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate space for memory types"); return -1; } @@ -101,8 +101,8 @@ eal_dynmem_memseg_lists_init(void) memtypes[cur_type].page_sz = hugepage_sz; memtypes[cur_type].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected memory type: " - "socket_id:%u hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected memory type: " + "socket_id:%u hugepage_sz:%" PRIu64, socket_id, hugepage_sz); } } @@ -120,7 +120,7 @@ eal_dynmem_memseg_lists_init(void) max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes; if (max_seglists_per_type == 0) { - RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } @@ -171,15 +171,15 @@ eal_dynmem_memseg_lists_init(void) /* limit number of segment lists according to our maximum */ n_seglists = RTE_MIN(n_seglists, max_seglists_per_type); - RTE_LOG(DEBUG, EAL, "Creating %i segment lists: " - "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Creating %i segment lists: " + "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64, n_seglists, n_segs, socket_id, pagesz); /* create all segment lists */ for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) { if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } msl = &mcfg->memsegs[msl_idx++]; @@ -189,7 +189,7 @@ eal_dynmem_memseg_lists_init(void) goto out; if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); goto out; } } @@ -287,9 +287,9 @@ eal_dynmem_hugepage_init(void) if (num_pages == 0) continue; - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Allocating %u pages of size %" PRIu64 "M " - "on socket %i\n", + "on socket %i", num_pages, hpi->hugepage_sz >> 20, socket_id); /* we may not be able to allocate all pages in one go, @@ -307,7 +307,7 @@ eal_dynmem_hugepage_init(void) pages = malloc(sizeof(*pages) * needed); if (pages == NULL) { - RTE_LOG(ERR, EAL, "Failed to malloc pages\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to malloc pages"); return -1; } @@ -342,7 +342,7 @@ 
eal_dynmem_hugepage_init(void) continue; if (rte_mem_alloc_validator_register("socket-limit", limits_callback, i, limit)) - RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to register socket limits validator callback"); } } return 0; @@ -515,8 +515,8 @@ eal_dynmem_calc_num_pages_per_socket( internal_conf->socket_mem[socket] / 0x100000); available = requested - ((unsigned int)(memory[socket] / 0x100000)); - RTE_LOG(ERR, EAL, "Not enough memory available on " - "socket %u! Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available on " + "socket %u! Requested: %uMB, available: %uMB", socket, requested, available); return -1; } @@ -526,8 +526,8 @@ eal_dynmem_calc_num_pages_per_socket( if (total_mem > 0) { requested = (unsigned int)(internal_conf->memory / 0x100000); available = requested - (unsigned int)(total_mem / 0x100000); - RTE_LOG(ERR, EAL, "Not enough memory available! " - "Requested: %uMB, available: %uMB\n", + RTE_LOG_LINE(ERR, EAL, "Not enough memory available! " + "Requested: %uMB, available: %uMB", requested, available); return -1; } diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c index 2055bfa57d..7b90e01500 100644 --- a/lib/eal/common/eal_common_fbarray.c +++ b/lib/eal/common/eal_common_fbarray.c @@ -83,7 +83,7 @@ resize_and_map(int fd, const char *path, void *addr, size_t len) void *map_addr; if (eal_file_truncate(fd, len)) { - RTE_LOG(ERR, EAL, "Cannot truncate %s\n", path); + RTE_LOG_LINE(ERR, EAL, "Cannot truncate %s", path); return -1; } @@ -755,7 +755,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, void *new_data = rte_mem_map(data, mmap_len, RTE_PROT_READ | RTE_PROT_WRITE, flags, fd, 0); if (new_data == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s", __func__, rte_strerror(rte_errno)); goto fail; } @@ -770,12 +770,12 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, */ fd = eal_file_open(path, EAL_OPEN_CREATE | EAL_OPEN_READWRITE); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't open %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't open %s: %s", __func__, path, rte_strerror(rte_errno)); goto fail; } else if (eal_file_lock( fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't lock %s: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't lock %s: %s", __func__, path, rte_strerror(rte_errno)); rte_errno = EBUSY; goto fail; @@ -1017,7 +1017,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) */ fd = tmp->fd; if (eal_file_lock(fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "Cannot destroy fbarray - another process is using it\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot destroy fbarray - another process is using it"); rte_errno = EBUSY; ret = -1; goto out; @@ -1026,7 +1026,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) /* we're OK to destroy the file */ eal_get_fbarray_path(path, sizeof(path), arr->name); if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "Cannot unlink fbarray: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot unlink fbarray: %s", strerror(errno)); rte_errno = errno; /* diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c index 97b64fed58..6a5723a068 100644 --- a/lib/eal/common/eal_common_interrupts.c +++ b/lib/eal/common/eal_common_interrupts.c @@ -15,7 +15,7 @@ /* Macros to check for 
valid interrupt handle */ #define CHECK_VALID_INTR_HANDLE(intr_handle) do { \ if (intr_handle == NULL) { \ - RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \ + RTE_LOG_LINE(DEBUG, EAL, "Interrupt instance unallocated"); \ rte_errno = EINVAL; \ goto fail; \ } \ @@ -37,7 +37,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) * defined flags. */ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) { - RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags); + RTE_LOG_LINE(DEBUG, EAL, "Invalid alloc flag passed 0x%x", flags); rte_errno = EINVAL; return NULL; } @@ -48,7 +48,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) else intr_handle = calloc(1, sizeof(*intr_handle)); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate intr_handle"); rte_errno = ENOMEM; return NULL; } @@ -61,7 +61,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(int)); } if (intr_handle->efds == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -75,7 +75,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(struct rte_epoll_event)); } if (intr_handle->elist == NULL) { - RTE_LOG(ERR, EAL, "fail to allocate event fd list\n"); + RTE_LOG_LINE(ERR, EAL, "fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -100,7 +100,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src) struct rte_intr_handle *intr_handle; if (src == NULL) { - RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n"); + RTE_LOG_LINE(DEBUG, EAL, "Source interrupt instance unallocated"); rte_errno = EINVAL; return NULL; } @@ -129,7 +129,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) CHECK_VALID_INTR_HANDLE(intr_handle); if (size == 0) { - RTE_LOG(DEBUG, EAL, "Size can't be zero\n"); + RTE_LOG_LINE(DEBUG, EAL, "Size can't be zero"); rte_errno = EINVAL; goto fail; } @@ -143,7 +143,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) tmp_efds = realloc(intr_handle->efds, size * sizeof(int)); } if (tmp_efds == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the efds list"); rte_errno = ENOMEM; goto fail; } @@ -157,7 +157,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) size * sizeof(struct rte_epoll_event)); } if (tmp_elist == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the event list\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to realloc the event list"); rte_errno = ENOMEM; goto fail; } @@ -253,8 +253,8 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (max_intr > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " - "the number of available events (%d)\n", max_intr, + RTE_LOG_LINE(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " + "the number of available events (%d)", max_intr, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -332,7 +332,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = EINVAL; 
goto fail; @@ -349,7 +349,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -368,7 +368,7 @@ struct rte_epoll_event *rte_intr_elist_index_get( CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -385,7 +385,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -408,7 +408,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, return 0; if (size > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size, + RTE_LOG_LINE(DEBUG, EAL, "Invalid size %d, max limit %d", size, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -419,7 +419,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, else intr_handle->intr_vec = calloc(size, sizeof(int)); if (intr_handle->intr_vec == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate %d intr_vec", size); rte_errno = ENOMEM; goto fail; } @@ -437,7 +437,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; @@ -454,7 +454,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c index 6807a38247..4ec1996d12 100644 --- a/lib/eal/common/eal_common_lcore.c +++ b/lib/eal/common/eal_common_lcore.c @@ -174,8 +174,8 @@ rte_eal_cpu_init(void) lcore_config[lcore_id].core_role = ROLE_RTE; lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id); lcore_config[lcore_id].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected lcore %u as " - "core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Detected lcore %u as " + "core %u on socket %u", lcore_id, lcore_config[lcore_id].core_id, lcore_config[lcore_id].socket_id); count++; @@ -183,17 +183,17 @@ rte_eal_cpu_init(void) for (; lcore_id < CPU_SETSIZE; lcore_id++) { if (eal_cpu_detected(lcore_id) == 0) continue; - RTE_LOG(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u", lcore_id, eal_cpu_core_id(lcore_id), eal_cpu_socket_id(lcore_id)); } /* Set the count of enabled logical cores of the EAL configuration */ config->lcore_count = count; - RTE_LOG(DEBUG, EAL, - "Maximum 
logical cores by configuration: %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Maximum logical cores by configuration: %u", RTE_MAX_LCORE); - RTE_LOG(INFO, EAL, "Detected CPU lcores: %u\n", config->lcore_count); + RTE_LOG_LINE(INFO, EAL, "Detected CPU lcores: %u", config->lcore_count); /* sort all socket id's in ascending order */ qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id), @@ -208,7 +208,7 @@ rte_eal_cpu_init(void) socket_id; prev_socket_id = socket_id; } - RTE_LOG(INFO, EAL, "Detected NUMA nodes: %u\n", config->numa_node_count); + RTE_LOG_LINE(INFO, EAL, "Detected NUMA nodes: %u", config->numa_node_count); return 0; } @@ -247,7 +247,7 @@ callback_init(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->init == NULL) return 0; - RTE_LOG(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u", callback->name, lcore_id); return callback->init(lcore_id, callback->arg); } @@ -257,7 +257,7 @@ callback_uninit(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->uninit == NULL) return; - RTE_LOG(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u", callback->name, lcore_id); callback->uninit(lcore_id, callback->arg); } @@ -311,7 +311,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init, } no_init: TAILQ_INSERT_TAIL(&lcore_callbacks, callback, next); - RTE_LOG(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).", callback->name, callback->init == NULL ? "NO " : "", callback->uninit == NULL ? "NO " : ""); out: @@ -339,7 +339,7 @@ rte_lcore_callback_unregister(void *handle) no_uninit: TAILQ_REMOVE(&lcore_callbacks, callback, next); rte_rwlock_write_unlock(&lcore_lock); - RTE_LOG(DEBUG, EAL, "Unregistered lcore callback %s-%p.\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered lcore callback %s-%p.", callback->name, callback->arg); free_callback(callback); } @@ -361,7 +361,7 @@ eal_lcore_non_eal_allocate(void) break; } if (lcore_id == RTE_MAX_LCORE) { - RTE_LOG(DEBUG, EAL, "No lcore available.\n"); + RTE_LOG_LINE(DEBUG, EAL, "No lcore available."); goto out; } TAILQ_FOREACH(callback, &lcore_callbacks, next) { @@ -375,7 +375,7 @@ eal_lcore_non_eal_allocate(void) callback_uninit(prev, lcore_id); prev = TAILQ_PREV(prev, lcore_callbacks_head, next); } - RTE_LOG(DEBUG, EAL, "Initialization refused for lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Initialization refused for lcore %u.", lcore_id); cfg->lcore_role[lcore_id] = ROLE_OFF; cfg->lcore_count--; diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c index ab04479c1c..feb22c2b2f 100644 --- a/lib/eal/common/eal_common_memalloc.c +++ b/lib/eal/common/eal_common_memalloc.c @@ -186,7 +186,7 @@ eal_memalloc_mem_event_callback_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' registered", name, arg); unlock: @@ -225,7 +225,7 @@ eal_memalloc_mem_event_callback_unregister(const char *name, void *arg) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' unregistered", name, arg); unlock: @@ -242,7 +242,7 @@ eal_memalloc_mem_event_notify(enum rte_mem_event event, const void *start, rte_rwlock_read_lock(&mem_event_rwlock); 
TAILQ_FOREACH(entry, &mem_event_callback_list, next) { - RTE_LOG(DEBUG, EAL, "Calling mem event callback '%s:%p'\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem event callback '%s:%p'", entry->name, entry->arg); entry->clb(event, start, len, entry->arg); } @@ -293,7 +293,7 @@ eal_memalloc_mem_alloc_validator_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered", name, socket_id, limit); unlock: @@ -332,7 +332,7 @@ eal_memalloc_mem_alloc_validator_unregister(const char *name, int socket_id) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered\n", + RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered", name, socket_id); unlock: @@ -351,7 +351,7 @@ eal_memalloc_mem_alloc_validate(int socket_id, size_t new_len) TAILQ_FOREACH(entry, &mem_alloc_validator_list, next) { if (entry->socket_id != socket_id || entry->limit > new_len) continue; - RTE_LOG(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i", entry->name, entry->socket_id); if (entry->clb(socket_id, entry->limit, new_len) < 0) ret = -1; diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c index d9433db623..9e183669a6 100644 --- a/lib/eal/common/eal_common_memory.c +++ b/lib/eal/common/eal_common_memory.c @@ -57,7 +57,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, if (system_page_sz == 0) system_page_sz = rte_mem_page_size(); - RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size); + RTE_LOG_LINE(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes", *size); addr_is_hint = (flags & EAL_VIRTUAL_AREA_ADDR_IS_HINT) > 0; allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0; @@ -94,7 +94,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, do { map_sz = no_align ? *size : *size + page_sz; if (map_sz > SIZE_MAX) { - RTE_LOG(ERR, EAL, "Map size too big\n"); + RTE_LOG_LINE(ERR, EAL, "Map size too big"); rte_errno = E2BIG; return NULL; } @@ -125,16 +125,16 @@ eal_get_virtual_area(void *requested_addr, size_t *size, RTE_PTR_ALIGN(mapped_addr, page_sz); if (*size == 0) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area of any size: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area of any size: %s", rte_strerror(rte_errno)); return NULL; } else if (mapped_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area: %s", rte_strerror(rte_errno)); return NULL; } else if (requested_addr != NULL && !addr_is_hint && aligned_addr != requested_addr) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)", requested_addr, aligned_addr); eal_mem_free(mapped_addr, map_sz); rte_errno = EADDRNOTAVAIL; @@ -146,19 +146,19 @@ eal_get_virtual_area(void *requested_addr, size_t *size, * a base virtual address. */ if (internal_conf->base_virtaddr != 0) { - RTE_LOG(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(WARNING, EAL, "WARNING! 
Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory into secondary processes"); } else { - RTE_LOG(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + RTE_LOG_LINE(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes\n"); + RTE_LOG_LINE(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes"); } } else if (next_baseaddr != NULL) { next_baseaddr = RTE_PTR_ADD(aligned_addr, *size); } - RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n", + RTE_LOG_LINE(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)", aligned_addr, *size); if (unmap) { @@ -202,7 +202,7 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, { if (rte_fbarray_init(&msl->memseg_arr, name, n_segs, sizeof(struct rte_memseg))) { - RTE_LOG(ERR, EAL, "Cannot allocate memseg list: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list: %s", rte_strerror(rte_errno)); return -1; } @@ -212,8 +212,8 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, msl->base_va = NULL; msl->heap = heap; - RTE_LOG(DEBUG, EAL, - "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB", socket_id, page_sz >> 10); return 0; @@ -251,8 +251,8 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) * including common code, so don't duplicate the message. 
*/ if (rte_errno == EADDRNOTAVAIL) - RTE_LOG(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " - "please use '--" OPT_BASE_VIRTADDR "' option\n", + RTE_LOG_LINE(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " + "please use '--" OPT_BASE_VIRTADDR "' option", (unsigned long long)mem_sz, msl->base_va); #endif return -1; @@ -260,7 +260,7 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) msl->base_va = addr; msl->len = mem_sz; - RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n", + RTE_LOG_LINE(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx", addr, mem_sz); return 0; @@ -472,7 +472,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -487,7 +487,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -503,7 +503,7 @@ rte_mem_alloc_validator_register(const char *name, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -519,7 +519,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -545,10 +545,10 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, if (!(iova & *mask)) return 0; - RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n", + RTE_LOG_LINE(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range", ms->iova, ms->len); - RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask); + RTE_LOG_LINE(DEBUG, EAL, "\tusing dma mask %"PRIx64, *mask); return 1; } @@ -565,7 +565,7 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe) /* Sanity check. We only check width can be managed with 64 bits * variables. Indeed any higher value is likely wrong. 
*/ if (maskbits > MAX_DMA_MASK_BITS) { - RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n", + RTE_LOG_LINE(ERR, EAL, "wrong dma mask size %u (Max: %u)", maskbits, MAX_DMA_MASK_BITS); return -1; } @@ -925,7 +925,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[], /* get next available socket ID */ socket_id = mcfg->next_socket_id; if (socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -1030,7 +1030,7 @@ rte_eal_memory_detach(void) /* detach internal memory subsystem data first */ if (eal_memalloc_cleanup()) - RTE_LOG(ERR, EAL, "Could not release memory subsystem data\n"); + RTE_LOG_LINE(ERR, EAL, "Could not release memory subsystem data"); for (i = 0; i < RTE_DIM(mcfg->memsegs); i++) { struct rte_memseg_list *msl = &mcfg->memsegs[i]; @@ -1047,7 +1047,7 @@ rte_eal_memory_detach(void) */ if (!msl->external) if (rte_mem_unmap(msl->base_va, msl->len) != 0) - RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap memory: %s", rte_strerror(rte_errno)); /* @@ -1056,7 +1056,7 @@ rte_eal_memory_detach(void) * have no way of knowing if they still do. */ if (rte_fbarray_detach(&msl->memseg_arr)) - RTE_LOG(ERR, EAL, "Could not detach fbarray: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not detach fbarray: %s", rte_strerror(rte_errno)); } rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock); @@ -1068,7 +1068,7 @@ rte_eal_memory_detach(void) */ if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) { if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0) - RTE_LOG(ERR, EAL, "Could not unmap shared memory config: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not unmap shared memory config: %s", rte_strerror(rte_errno)); } rte_eal_get_configuration()->mem_config = NULL; @@ -1084,7 +1084,7 @@ rte_eal_memory_init(void) eal_get_internal_configuration(); int retval; - RTE_LOG(DEBUG, EAL, "Setting up physically contiguous memory...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Setting up physically contiguous memory..."); if (rte_eal_memseg_init() < 0) goto fail; @@ -1213,7 +1213,7 @@ handle_eal_memzone_info_request(const char *cmd __rte_unused, /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return -1; } page_sz = (size_t)mz->hugepage_sz; @@ -1404,7 +1404,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1477,7 +1477,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } @@ -1555,7 +1555,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg."); return -1; } diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c index 
1f3e701499..fc478d0fac 100644 --- a/lib/eal/common/eal_common_memzone.c +++ b/lib/eal/common/eal_common_memzone.c @@ -31,13 +31,13 @@ rte_memzone_max_set(size_t max) struct rte_mem_config *mcfg; if (eal_get_internal_configuration()->init_complete > 0) { - RTE_LOG(ERR, EAL, "Max memzone cannot be set after EAL init\n"); + RTE_LOG_LINE(ERR, EAL, "Max memzone cannot be set after EAL init"); return -1; } mcfg = rte_eal_get_configuration()->mem_config; if (mcfg == NULL) { - RTE_LOG(ERR, EAL, "Failed to set max memzone count\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to set max memzone count"); return -1; } @@ -116,16 +116,16 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* no more room in config */ if (arr->count >= arr->len) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): Number of requested memzone segments exceeds maximum " - "%u\n", __func__, arr->len); + "%u", __func__, arr->len); rte_errno = ENOSPC; return NULL; } if (strlen(name) > sizeof(mz->name) - 1) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s>: name too long", __func__, name); rte_errno = ENAMETOOLONG; return NULL; @@ -133,7 +133,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* zone already exist */ if ((memzone_lookup_thread_unsafe(name)) != NULL) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s> already exists", __func__, name); rte_errno = EEXIST; return NULL; @@ -141,7 +141,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* if alignment is not a power of two */ if (align && !rte_is_power_of_2(align)) { - RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Invalid alignment: %u", __func__, align); rte_errno = EINVAL; return NULL; @@ -218,7 +218,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, } if (mz == NULL) { - RTE_LOG(ERR, EAL, "%s(): Cannot find free memzone\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot find free memzone", __func__); malloc_heap_free(elem); rte_errno = ENOSPC; return NULL; @@ -323,7 +323,7 @@ rte_memzone_free(const struct rte_memzone *mz) if (found_mz == NULL) { ret = -EINVAL; } else if (found_mz->addr == NULL) { - RTE_LOG(ERR, EAL, "Memzone is not allocated\n"); + RTE_LOG_LINE(ERR, EAL, "Memzone is not allocated"); ret = -EINVAL; } else { addr = found_mz->addr; @@ -385,7 +385,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg) /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone"); return; } page_sz = (size_t)mz->hugepage_sz; @@ -434,11 +434,11 @@ rte_eal_memzone_init(void) if (rte_eal_process_type() == RTE_PROC_PRIMARY && rte_fbarray_init(&mcfg->memzones, "memzone", rte_memzone_max_get(), sizeof(struct rte_memzone))) { - RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memzone list"); ret = -1; } else if (rte_eal_process_type() == RTE_PROC_SECONDARY && rte_fbarray_attach(&mcfg->memzones)) { - RTE_LOG(ERR, EAL, "Cannot attach to memzone list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to memzone list"); ret = -1; } diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index e9ba01fb89..c1af05b134 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -255,14 +255,14 
@@ eal_option_device_add(enum rte_devtype type, const char *optarg) optlen = strlen(optarg) + 1; devopt = calloc(1, sizeof(*devopt) + optlen); if (devopt == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate device option"); return -ENOMEM; } devopt->type = type; ret = strlcpy(devopt->arg, optarg, optlen); if (ret < 0) { - RTE_LOG(ERR, EAL, "Unable to copy device option\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to copy device option"); free(devopt); return -EINVAL; } @@ -281,7 +281,7 @@ eal_option_device_parse(void) if (ret == 0) { ret = rte_devargs_add(devopt->type, devopt->arg); if (ret) - RTE_LOG(ERR, EAL, "Unable to parse device '%s'\n", + RTE_LOG_LINE(ERR, EAL, "Unable to parse device '%s'", devopt->arg); } TAILQ_REMOVE(&devopt_list, devopt, next); @@ -360,7 +360,7 @@ eal_plugin_add(const char *path) solib = malloc(sizeof(*solib)); if (solib == NULL) { - RTE_LOG(ERR, EAL, "malloc(solib) failed\n"); + RTE_LOG_LINE(ERR, EAL, "malloc(solib) failed"); return -1; } memset(solib, 0, sizeof(*solib)); @@ -390,7 +390,7 @@ eal_plugindir_init(const char *path) d = opendir(path); if (d == NULL) { - RTE_LOG(ERR, EAL, "failed to open directory %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open directory %s: %s", path, strerror(errno)); return -1; } @@ -442,13 +442,13 @@ verify_perms(const char *dirpath) /* call stat to check for permissions and ensure not world writable */ if (stat(dirpath, &st) != 0) { - RTE_LOG(ERR, EAL, "Error with stat on %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with stat on %s, %s", dirpath, strerror(errno)); return -1; } if (st.st_mode & S_IWOTH) { - RTE_LOG(ERR, EAL, - "Error, directory path %s is world-writable and insecure\n", + RTE_LOG_LINE(ERR, EAL, + "Error, directory path %s is world-writable and insecure", dirpath); return -1; } @@ -466,16 +466,16 @@ eal_dlopen(const char *pathname) /* not a full or relative path, try a load from system dirs */ retval = dlopen(pathname, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); return retval; } if (realp == NULL) { - RTE_LOG(ERR, EAL, "Error with realpath for %s, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error with realpath for %s, %s", pathname, strerror(errno)); goto out; } if (strnlen(realp, PATH_MAX) == PATH_MAX) { - RTE_LOG(ERR, EAL, "Error, driver path greater than PATH_MAX\n"); + RTE_LOG_LINE(ERR, EAL, "Error, driver path greater than PATH_MAX"); goto out; } @@ -485,7 +485,7 @@ eal_dlopen(const char *pathname) retval = dlopen(realp, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + RTE_LOG_LINE(ERR, EAL, "%s", dlerror()); out: free(realp); return retval; @@ -500,7 +500,7 @@ is_shared_build(void) len = strlcpy(soname, EAL_SO"."ABI_VERSION, sizeof(soname)); if (len > sizeof(soname)) { - RTE_LOG(ERR, EAL, "Shared lib name too long in shared build check\n"); + RTE_LOG_LINE(ERR, EAL, "Shared lib name too long in shared build check"); len = sizeof(soname) - 1; } @@ -508,10 +508,10 @@ is_shared_build(void) void *handle; /* check if we have this .so loaded, if so - shared build */ - RTE_LOG(DEBUG, EAL, "Checking presence of .so '%s'\n", soname); + RTE_LOG_LINE(DEBUG, EAL, "Checking presence of .so '%s'", soname); handle = dlopen(soname, RTLD_LAZY | RTLD_NOLOAD); if (handle != NULL) { - RTE_LOG(INFO, EAL, "Detected shared linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected shared linkage of DPDK"); dlclose(handle); return 1; } @@ -524,7 +524,7 @@ is_shared_build(void) } } - RTE_LOG(INFO, 
EAL, "Detected static linkage of DPDK\n"); + RTE_LOG_LINE(INFO, EAL, "Detected static linkage of DPDK"); return 0; } @@ -549,13 +549,13 @@ eal_plugins_init(void) if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) { if (eal_plugindir_init(solib->name) == -1) { - RTE_LOG(ERR, EAL, - "Cannot init plugin directory %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot init plugin directory %s", solib->name); return -1; } } else { - RTE_LOG(DEBUG, EAL, "open shared lib %s\n", + RTE_LOG_LINE(DEBUG, EAL, "open shared lib %s", solib->name); solib->lib_handle = eal_dlopen(solib->name); if (solib->lib_handle == NULL) @@ -626,15 +626,15 @@ eal_parse_service_coremask(const char *coremask) uint32_t lcore = idx; if (main_lcore_parsed && cfg->main_lcore == lcore) { - RTE_LOG(ERR, EAL, - "lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "lcore %u is main lcore, cannot use as service core", idx); return -1; } if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, - "lcore %u unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, + "lcore %u unavailable", idx); return -1; } @@ -658,9 +658,9 @@ eal_parse_service_coremask(const char *coremask) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores are in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } cfg->service_lcore_count = count; @@ -689,7 +689,7 @@ update_lcore_config(int *cores) for (i = 0; i < RTE_MAX_LCORE; i++) { if (cores[i] != -1) { if (eal_cpu_detected(i) == 0) { - RTE_LOG(ERR, EAL, "lcore %u unavailable\n", i); + RTE_LOG_LINE(ERR, EAL, "lcore %u unavailable", i); ret = -1; continue; } @@ -717,7 +717,7 @@ check_core_list(int *lcores, unsigned int count) if (lcores[i] < RTE_MAX_LCORE) continue; - RTE_LOG(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)", lcores[i], RTE_MAX_LCORE); overflow = true; } @@ -737,9 +737,9 @@ check_core_list(int *lcores, unsigned int count) } if (len > 0) lcorestr[len - 1] = 0; - RTE_LOG(ERR, EAL, "To use high physical core ids, " + RTE_LOG_LINE(ERR, EAL, "To use high physical core ids, " "please use --lcores to map them to lcore ids below RTE_MAX_LCORE, " - "e.g. --lcores %s\n", lcorestr); + "e.g. --lcores %s", lcorestr); return -1; } @@ -769,7 +769,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) while ((i > 0) && isblank(coremask[i - 1])) i--; if (i == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -778,7 +778,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) c = coremask[i]; if (isxdigit(c) == 0) { /* invalid characters */ - RTE_LOG(ERR, EAL, "invalid characters in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "invalid characters in coremask: [%s]", coremask_orig); return -1; } @@ -787,7 +787,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) { if ((1 << j) & val) { if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. 
Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -796,7 +796,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) } } if (count == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -864,8 +864,8 @@ eal_parse_service_corelist(const char *corelist) uint32_t lcore = idx; if (cfg->main_lcore == lcore && main_lcore_parsed) { - RTE_LOG(ERR, EAL, - "Error: lcore %u is main lcore, cannot use as service core\n", + RTE_LOG_LINE(ERR, EAL, + "Error: lcore %u is main lcore, cannot use as service core", idx); return -1; } @@ -887,9 +887,9 @@ eal_parse_service_corelist(const char *corelist) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + RTE_LOG_LINE(WARNING, EAL, "Not all service cores were in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } return 0; @@ -943,7 +943,7 @@ eal_parse_corelist(const char *corelist, int *cores) if (dup) continue; if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -991,8 +991,8 @@ eal_parse_main_lcore(const char *arg) /* ensure main core is not used as service core */ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) { - RTE_LOG(ERR, EAL, - "Error: Main lcore is used as a service core\n"); + RTE_LOG_LINE(ERR, EAL, + "Error: Main lcore is used as a service core"); return -1; } @@ -1132,8 +1132,8 @@ check_cpuset(rte_cpuset_t *set) continue; if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, "core %u " - "unavailable\n", idx); + RTE_LOG_LINE(ERR, EAL, "core %u " + "unavailable", idx); return -1; } } @@ -1612,8 +1612,8 @@ eal_parse_huge_unlink(const char *arg, struct hugepage_file_discipline *out) return 0; } if (strcmp(arg, HUGE_UNLINK_NEVER) == 0) { - RTE_LOG(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" - HUGE_UNLINK_NEVER" may create data leaks.\n"); + RTE_LOG_LINE(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" + HUGE_UNLINK_NEVER" may create data leaks."); out->unlink_existing = false; return 0; } @@ -1648,24 +1648,24 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S"); if (rte_eal_parse_coremask(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid coremask syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid coremask syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid coremask, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid coremask, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -c is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -c is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MAP) ? 
"--lcore" : "-c"); @@ -1680,25 +1680,25 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S\n"); + RTE_LOG_LINE(WARNING, EAL, + "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S"); if (eal_parse_corelist(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid core list syntax\n"); + RTE_LOG_LINE(ERR, EAL, "invalid core list syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid core list, please check specified cores are part of %s\n", + RTE_LOG_LINE(ERR, EAL, + "invalid core list, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -l is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option -l is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_MSK) ? "-c" : (core_parsed == LCORE_OPT_MAP) ? "--lcore" : "-l"); @@ -1711,14 +1711,14 @@ eal_parse_common_option(int opt, const char *optarg, /* service coremask */ case 's': if (eal_parse_service_coremask(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service coremask\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service coremask"); return -1; } break; /* service corelist */ case 'S': if (eal_parse_service_corelist(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service core list\n"); + RTE_LOG_LINE(ERR, EAL, "invalid service core list"); return -1; } break; @@ -1733,7 +1733,7 @@ eal_parse_common_option(int opt, const char *optarg, case 'n': conf->force_nchannel = atoi(optarg); if (conf->force_nchannel == 0) { - RTE_LOG(ERR, EAL, "invalid channel number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid channel number"); return -1; } break; @@ -1742,7 +1742,7 @@ eal_parse_common_option(int opt, const char *optarg, conf->force_nrank = atoi(optarg); if (conf->force_nrank == 0 || conf->force_nrank > 16) { - RTE_LOG(ERR, EAL, "invalid rank number\n"); + RTE_LOG_LINE(ERR, EAL, "invalid rank number"); return -1; } break; @@ -1756,13 +1756,13 @@ eal_parse_common_option(int opt, const char *optarg, * write message at highest log level so it can always * be seen * even if info or warning messages are disabled */ - RTE_LOG(CRIT, EAL, "RTE Version: '%s'\n", rte_version()); + RTE_LOG_LINE(CRIT, EAL, "RTE Version: '%s'", rte_version()); break; /* long options */ case OPT_HUGE_UNLINK_NUM: if (eal_parse_huge_unlink(optarg, &conf->hugepage_file) < 0) { - RTE_LOG(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option\n"); + RTE_LOG_LINE(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option"); return -1; } break; @@ -1802,8 +1802,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_MAIN_LCORE_NUM: if (eal_parse_main_lcore(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_MAIN_LCORE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_MAIN_LCORE); return -1; } break; @@ -1818,8 +1818,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_SYSLOG_NUM: if (eal_parse_syslog(optarg, conf) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SYSLOG "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SYSLOG); return -1; } break; @@ -1827,9 +1827,9 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LOG_LEVEL_NUM: { if (eal_parse_log_level(optarg) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, 
"invalid parameters for --" - OPT_LOG_LEVEL "\n"); + OPT_LOG_LEVEL); return -1; } break; @@ -1838,8 +1838,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_TRACE_NUM: { if (eal_trace_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE); return -1; } break; @@ -1847,8 +1847,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_DIR_NUM: { if (eal_trace_dir_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_DIR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_DIR); return -1; } break; @@ -1856,8 +1856,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_BUF_SIZE_NUM: { if (eal_trace_bufsz_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_BUF_SIZE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_BUF_SIZE); return -1; } break; @@ -1865,8 +1865,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_MODE_NUM: { if (eal_trace_mode_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_TRACE_MODE); return -1; } break; @@ -1875,13 +1875,13 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LCORES_NUM: if (eal_parse_lcores(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_LCORES "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_LCORES); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option --lcore is ignored, because (%s) is set!\n", + RTE_LOG_LINE(ERR, EAL, "Option --lcore is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MSK) ? 
"-c" : "--lcore"); @@ -1898,15 +1898,15 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_IOVA_MODE_NUM: if (eal_parse_iova_mode(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_IOVA_MODE "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_IOVA_MODE); return -1; } break; case OPT_BASE_VIRTADDR_NUM: if (eal_parse_base_virtaddr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_BASE_VIRTADDR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_BASE_VIRTADDR); return -1; } break; @@ -1917,8 +1917,8 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_FORCE_MAX_SIMD_BITWIDTH_NUM: if (eal_parse_simd_bitwidth(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_FORCE_MAX_SIMD_BITWIDTH "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_FORCE_MAX_SIMD_BITWIDTH); return -1; } break; @@ -1932,8 +1932,8 @@ eal_parse_common_option(int opt, const char *optarg, return 0; ba_conflict: - RTE_LOG(ERR, EAL, - "Options allow (-a) and block (-b) can't be used at the same time\n"); + RTE_LOG_LINE(ERR, EAL, + "Options allow (-a) and block (-b) can't be used at the same time"); return -1; } @@ -2034,94 +2034,94 @@ eal_check_common_options(struct internal_config *internal_cfg) eal_get_internal_configuration(); if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) { - RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n"); + RTE_LOG_LINE(ERR, EAL, "Main lcore is not enabled for DPDK"); return -1; } if (internal_cfg->process_type == RTE_PROC_INVALID) { - RTE_LOG(ERR, EAL, "Invalid process type specified\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid process type specified"); return -1; } if (internal_cfg->hugefile_prefix != NULL && strlen(internal_cfg->hugefile_prefix) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option"); return -1; } if (internal_cfg->hugepage_dir != NULL && strlen(internal_cfg->hugepage_dir) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option"); return -1; } if (internal_cfg->user_mbuf_pool_ops_name != NULL && strlen(internal_cfg->user_mbuf_pool_ops_name) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option"); return -1; } if (strchr(eal_get_hugefile_prefix(), '%') != NULL) { - RTE_LOG(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " - "option\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " + "option"); return -1; } if (mem_parsed && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " - "be specified at the same time\n"); + RTE_LOG_LINE(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " + "be specified at the same time"); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option 
--"OPT_HUGE_UNLINK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->huge_worker_stack_size != 0) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_conf->force_socket_limits && internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_LIMIT - " is only supported in non-legacy memory mode\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_LIMIT + " is only supported in non-legacy memory mode"); } if (internal_cfg->single_file_segments && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " - "not compatible with --"OPT_HUGE_UNLINK"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " + "not compatible with --"OPT_HUGE_UNLINK); return -1; } if (!internal_cfg->hugepage_file.unlink_existing && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " - "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " + "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER); return -1; } if (internal_cfg->legacy_mem && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_IN_MEMORY"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_IN_MEMORY); return -1; } if (internal_cfg->legacy_mem && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + RTE_LOG_LINE(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->legacy_mem && internal_cfg->memory == 0) { - RTE_LOG(NOTICE, EAL, "Static memory layout is selected, " + RTE_LOG_LINE(NOTICE, EAL, "Static memory layout is selected, " "amount of reserved memory can be adjusted with " - "-m or --"OPT_SOCKET_MEM"\n"); + "-m or --"OPT_SOCKET_MEM); } return 0; @@ -2141,12 +2141,12 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); return -EPERM; } if (bitwidth < RTE_VECT_SIMD_DISABLED || !rte_is_power_of_2(bitwidth)) { - RTE_LOG(ERR, EAL, "Invalid bitwidth value!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid bitwidth value!"); return -EINVAL; } internal_conf->max_simd_bitwidth.bitwidth = bitwidth; diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c index 728815c4a9..abc6117c65 100644 --- a/lib/eal/common/eal_common_proc.c +++ b/lib/eal/common/eal_common_proc.c @@ -181,12 +181,12 @@ static int validate_action_name(const char *name) { if (name == NULL) { - RTE_LOG(ERR, EAL, "Action name cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Action 
name cannot be NULL"); rte_errno = EINVAL; return -1; } if (strnlen(name, RTE_MP_MAX_NAME_LEN) == 0) { - RTE_LOG(ERR, EAL, "Length of action name is zero\n"); + RTE_LOG_LINE(ERR, EAL, "Length of action name is zero"); rte_errno = EINVAL; return -1; } @@ -208,7 +208,7 @@ rte_mp_action_register(const char *name, rte_mp_t action) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } @@ -244,7 +244,7 @@ rte_mp_action_unregister(const char *name) return; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return; } @@ -291,12 +291,12 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) if (errno == EINTR) goto retry; - RTE_LOG(ERR, EAL, "recvmsg failed, %s\n", strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "recvmsg failed, %s", strerror(errno)); return -1; } if (msglen != buflen || (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) { - RTE_LOG(ERR, EAL, "truncated msg\n"); + RTE_LOG_LINE(ERR, EAL, "truncated msg"); return -1; } @@ -311,11 +311,11 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) } /* sanity-check the response */ if (m->msg.num_fds < 0 || m->msg.num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "invalid number of fd's received\n"); + RTE_LOG_LINE(ERR, EAL, "invalid number of fd's received"); return -1; } if (m->msg.len_param < 0 || m->msg.len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "invalid received data length\n"); + RTE_LOG_LINE(ERR, EAL, "invalid received data length"); return -1; } return msglen; @@ -340,7 +340,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "msg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "msg: %s", msg->name); if (m->type == MP_REP || m->type == MP_IGN) { struct pending_request *req = NULL; @@ -359,7 +359,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) req = async_reply_handle_thread_unsafe( pending_req); } else { - RTE_LOG(ERR, EAL, "Drop mp reply: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Drop mp reply: %s", msg->name); cleanup_msg_fds(msg); } pthread_mutex_unlock(&pending_requests.lock); @@ -388,12 +388,12 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) strlcpy(dummy.name, msg->name, sizeof(dummy.name)); mp_send(&dummy, s->sun_path, MP_IGN); } else { - RTE_LOG(ERR, EAL, "Cannot find action: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find action: %s", msg->name); } cleanup_msg_fds(msg); } else if (action(msg, s->sun_path) < 0) { - RTE_LOG(ERR, EAL, "Fail to handle message: %s\n", msg->name); + RTE_LOG_LINE(ERR, EAL, "Fail to handle message: %s", msg->name); } } @@ -459,7 +459,7 @@ process_async_request(struct pending_request *sr, const struct timespec *now) tmp = realloc(user_msgs, sizeof(*msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", sr->dst, sr->request->name); /* this entry is going to be removed and its message * dropped, but we don't want to leak memory, so @@ -518,7 +518,7 @@ async_reply_handle_thread_unsafe(void *arg) struct timespec ts_now; if (clock_gettime(CLOCK_MONOTONIC, &ts_now) < 0) { - RTE_LOG(ERR, EAL, "Cannot get 
current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto no_trigger; } @@ -532,10 +532,10 @@ async_reply_handle_thread_unsafe(void *arg) * handling the same message twice. */ if (rte_errno == EINPROGRESS) { - RTE_LOG(DEBUG, EAL, "Request handling is already in progress\n"); + RTE_LOG_LINE(DEBUG, EAL, "Request handling is already in progress"); goto no_trigger; } - RTE_LOG(ERR, EAL, "Failed to cancel alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to cancel alarm"); } if (action == ACTION_TRIGGER) @@ -570,7 +570,7 @@ open_socket_fd(void) mp_fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (mp_fd < 0) { - RTE_LOG(ERR, EAL, "failed to create unix socket\n"); + RTE_LOG_LINE(ERR, EAL, "failed to create unix socket"); return -1; } @@ -582,13 +582,13 @@ open_socket_fd(void) unlink(un.sun_path); /* May still exist since last run */ if (bind(mp_fd, (struct sockaddr *)&un, sizeof(un)) < 0) { - RTE_LOG(ERR, EAL, "failed to bind %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to bind %s: %s", un.sun_path, strerror(errno)); close(mp_fd); return -1; } - RTE_LOG(INFO, EAL, "Multi-process socket %s\n", un.sun_path); + RTE_LOG_LINE(INFO, EAL, "Multi-process socket %s", un.sun_path); return mp_fd; } @@ -614,7 +614,7 @@ rte_mp_channel_init(void) * so no need to initialize IPC. */ if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled"); rte_errno = ENOTSUP; return -1; } @@ -630,13 +630,13 @@ rte_mp_channel_init(void) /* lock the directory */ dir_fd = open(mp_dir_path, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "failed to open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to open %s: %s", mp_dir_path, strerror(errno)); return -1; } if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "failed to lock %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to lock %s: %s", mp_dir_path, strerror(errno)); close(dir_fd); return -1; @@ -649,7 +649,7 @@ rte_mp_channel_init(void) if (rte_thread_create_internal_control(&mp_handle_tid, "mp-msg", mp_handle, NULL) < 0) { - RTE_LOG(ERR, EAL, "failed to create mp thread: %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to create mp thread: %s", strerror(errno)); close(dir_fd); close(rte_atomic_exchange_explicit(&mp_fd, -1, rte_memory_order_relaxed)); @@ -732,7 +732,7 @@ send_msg(const char *dst_path, struct rte_mp_msg *msg, int type) unlink(dst_path); return 0; } - RTE_LOG(ERR, EAL, "failed to send to (%s) due to %s\n", + RTE_LOG_LINE(ERR, EAL, "failed to send to (%s) due to %s", dst_path, strerror(errno)); return -1; } @@ -760,7 +760,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) /* broadcast to all secondary processes */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; return -1; @@ -769,7 +769,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; closedir(mp_dir); @@ -799,7 +799,7 @@ static int check_input(const struct rte_mp_msg *msg) { if (msg == NULL) { - RTE_LOG(ERR, EAL, "Msg cannot be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "Msg cannot be NULL"); rte_errno = EINVAL; return -1; } @@ -808,25 +808,25 @@ check_input(const struct 
rte_mp_msg *msg) return -1; if (msg->len_param < 0) { - RTE_LOG(ERR, EAL, "Message data length is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Message data length is negative"); rte_errno = EINVAL; return -1; } if (msg->num_fds < 0) { - RTE_LOG(ERR, EAL, "Number of fd's is negative\n"); + RTE_LOG_LINE(ERR, EAL, "Number of fd's is negative"); rte_errno = EINVAL; return -1; } if (msg->len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "Message data is too long\n"); + RTE_LOG_LINE(ERR, EAL, "Message data is too long"); rte_errno = E2BIG; return -1; } if (msg->num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "Cannot send more than %d FDs\n", + RTE_LOG_LINE(ERR, EAL, "Cannot send more than %d FDs", RTE_MP_MAX_FD_NUM); rte_errno = E2BIG; return -1; @@ -845,12 +845,12 @@ rte_mp_sendmsg(struct rte_mp_msg *msg) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } - RTE_LOG(DEBUG, EAL, "sendmsg: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "sendmsg: %s", msg->name); return mp_send(msg, NULL, MP_MSG); } @@ -865,7 +865,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, pending_req = calloc(1, sizeof(*pending_req)); reply_msg = calloc(1, sizeof(*reply_msg)); if (pending_req == NULL || reply_msg == NULL) { - RTE_LOG(ERR, EAL, "Could not allocate space for sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Could not allocate space for sync request"); rte_errno = ENOMEM; ret = -1; goto fail; @@ -881,7 +881,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; ret = -1; goto fail; @@ -889,7 +889,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); ret = -1; goto fail; @@ -902,7 +902,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, /* if alarm set fails, we simply ignore the reply */ if (rte_eal_alarm_set(ts->tv_sec * 1000000 + ts->tv_nsec / 1000, async_reply_handle, pending_req) < 0) { - RTE_LOG(ERR, EAL, "Fail to set alarm for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to set alarm for request %s:%s", dst, req->name); ret = -1; goto fail; @@ -936,14 +936,14 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; return -1; } ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s", dst, req->name); return -1; } else if (ret == 0) @@ -961,13 +961,13 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, TAILQ_REMOVE(&pending_requests.requests, &pending_req, next); if (pending_req.reply_received == 0) { - RTE_LOG(ERR, EAL, "Fail to recv reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to recv reply for request %s:%s", dst, req->name); rte_errno = ETIMEDOUT; return -1; } if (pending_req.reply_received == -1) { - RTE_LOG(DEBUG, EAL, "Asked to ignore response\n"); + RTE_LOG_LINE(DEBUG, 
EAL, "Asked to ignore response"); /* not receiving this message is not an error, so decrement * number of sent messages */ @@ -977,7 +977,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, tmp = realloc(reply->msgs, sizeof(msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s", dst, req->name); rte_errno = ENOMEM; return -1; @@ -999,7 +999,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); reply->nb_sent = 0; reply->nb_received = 0; @@ -1009,13 +1009,13 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, goto end; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; goto end; } @@ -1035,7 +1035,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, /* for primary process, broadcast request, and collect reply 1 by 1 */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto end; } @@ -1043,7 +1043,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto close_end; @@ -1102,19 +1102,19 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name); if (check_input(req) != 0) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to get current time"); rte_errno = errno; return -1; } @@ -1122,7 +1122,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, dummy = calloc(1, sizeof(*dummy)); param = calloc(1, sizeof(*param)); if (copy == NULL || dummy == NULL || param == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate memory for async reply\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for async reply"); rte_errno = ENOMEM; goto fail; } @@ -1180,7 +1180,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* for primary process, broadcast request */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto unlock_fail; } @@ -1188,7 +1188,7 @@ 
rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto closedir_fail; @@ -1240,7 +1240,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, int rte_mp_reply(struct rte_mp_msg *msg, const char *peer) { - RTE_LOG(DEBUG, EAL, "reply: %s\n", msg->name); + RTE_LOG_LINE(DEBUG, EAL, "reply: %s", msg->name); const struct internal_config *internal_conf = eal_get_internal_configuration(); @@ -1248,13 +1248,13 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer) return -1; if (peer == NULL) { - RTE_LOG(ERR, EAL, "peer is not specified\n"); + RTE_LOG_LINE(ERR, EAL, "peer is not specified"); rte_errno = EINVAL; return -1; } if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled"); return 0; } diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c index 580fbf24bc..06a6cac4ff 100644 --- a/lib/eal/common/eal_common_tailqs.c +++ b/lib/eal/common/eal_common_tailqs.c @@ -109,8 +109,8 @@ int rte_eal_tailq_register(struct rte_tailq_elem *t) { if (rte_eal_tailq_local_register(t) < 0) { - RTE_LOG(ERR, EAL, - "%s tailq is already registered\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "%s tailq is already registered", t->name); goto error; } @@ -119,8 +119,8 @@ rte_eal_tailq_register(struct rte_tailq_elem *t) if (rte_tailqs_count >= 0) { rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); TAILQ_REMOVE(&rte_tailq_elem_head, t, next); goto error; } @@ -145,8 +145,8 @@ rte_eal_tailqs_init(void) * rte_eal_tailq_register and EAL_REGISTER_TAILQ */ rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + RTE_LOG_LINE(ERR, EAL, + "Cannot initialize tailq: %s", t->name); /* TAILQ_REMOVE not needed, error is already fatal */ goto fail; } diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c index c422ea8b53..b0974a7aa5 100644 --- a/lib/eal/common/eal_common_thread.c +++ b/lib/eal/common/eal_common_thread.c @@ -86,7 +86,7 @@ int rte_thread_set_affinity(rte_cpuset_t *cpusetp) { if (rte_thread_set_affinity_by_id(rte_thread_self(), cpusetp) != 0) { - RTE_LOG(ERR, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_thread_set_affinity_by_id failed"); return -1; } @@ -175,7 +175,7 @@ eal_thread_loop(void *arg) __rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])", lcore_id, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -368,12 +368,12 @@ rte_thread_register(void) /* EAL init flushes all lcores, we can't register before. 
*/ if (eal_get_internal_configuration()->init_complete != 1) { - RTE_LOG(DEBUG, EAL, "Called %s before EAL init.\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "Called %s before EAL init.", __func__); rte_errno = EINVAL; return -1; } if (!rte_mp_disable()) { - RTE_LOG(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.\n"); + RTE_LOG_LINE(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported."); rte_errno = EINVAL; return -1; } @@ -387,7 +387,7 @@ rte_thread_register(void) rte_errno = ENOMEM; return -1; } - RTE_LOG(DEBUG, EAL, "Registered non-EAL thread as lcore %u.\n", + RTE_LOG_LINE(DEBUG, EAL, "Registered non-EAL thread as lcore %u.", lcore_id); return 0; } @@ -401,7 +401,7 @@ rte_thread_unregister(void) eal_lcore_non_eal_release(lcore_id); __rte_thread_uninit(); if (lcore_id != LCORE_ID_ANY) - RTE_LOG(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).\n", + RTE_LOG_LINE(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).", lcore_id); } diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c index 5686a5102b..bd2ca85c6c 100644 --- a/lib/eal/common/eal_common_timer.c +++ b/lib/eal/common/eal_common_timer.c @@ -39,8 +39,8 @@ static uint64_t estimate_tsc_freq(void) { #define CYC_PER_10MHZ 1E7 - RTE_LOG(WARNING, EAL, "WARNING: TSC frequency estimated roughly" - " - clock timings may be less accurate.\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: TSC frequency estimated roughly" + " - clock timings may be less accurate."); /* assume that the rte_delay_us_sleep() will sleep for 1 second */ uint64_t start = rte_rdtsc(); rte_delay_us_sleep(US_PER_S); @@ -71,7 +71,7 @@ set_tsc_freq(void) if (!freq) freq = estimate_tsc_freq(); - RTE_LOG(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz\n", freq / 1000); + RTE_LOG_LINE(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz", freq / 1000); eal_tsc_resolution_hz = freq; mcfg->tsc_hz = freq; } diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c index 8561a0e198..f5e724f9cd 100644 --- a/lib/eal/common/eal_common_trace_utils.c +++ b/lib/eal/common/eal_common_trace_utils.c @@ -348,7 +348,7 @@ trace_mkdir(void) return -rte_errno; } - RTE_LOG(INFO, EAL, "Trace dir: %s\n", trace->dir); + RTE_LOG_LINE(INFO, EAL, "Trace dir: %s", trace->dir); already_done = true; return 0; } diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h index ace2ef3ee5..4dbd6ea457 100644 --- a/lib/eal/common/eal_trace.h +++ b/lib/eal/common/eal_trace.h @@ -17,10 +17,10 @@ #include "eal_thread.h" #define trace_err(fmt, args...) \ - RTE_LOG(ERR, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(ERR, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define trace_crit(fmt, args...) 
\ - RTE_LOG(CRIT, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + RTE_LOG_LINE(CRIT, EAL, "%s():%u " fmt, __func__, __LINE__, ## args) #define TRACE_CTF_MAGIC 0xC1FC1FC1 #define TRACE_MAX_ARGS 32 diff --git a/lib/eal/common/hotplug_mp.c b/lib/eal/common/hotplug_mp.c index 602781966c..cd47c248f5 100644 --- a/lib/eal/common/hotplug_mp.c +++ b/lib/eal/common/hotplug_mp.c @@ -77,7 +77,7 @@ send_response_to_secondary(const struct eal_dev_mp_req *req, ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); return ret; } @@ -101,18 +101,18 @@ __handle_secondary_request(void *param) if (req->t == EAL_DEV_REQ_TYPE_ATTACH) { ret = local_dev_probe(req->devargs, &dev); if (ret != 0 && ret != -EEXIST) { - RTE_LOG(ERR, EAL, "Failed to hotplug add device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on primary"); goto finish; } ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } if (tmp_req.result != 0) { ret = tmp_req.result; - RTE_LOG(ERR, EAL, "Failed to hotplug add device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on secondary"); if (ret != -EEXIST) goto rollback; } @@ -123,27 +123,27 @@ __handle_secondary_request(void *param) ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } bus = rte_bus_find_by_name(da.bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da.bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da.bus->name); ret = -ENOENT; goto finish; } dev = bus->find_device(NULL, cmp_dev_name, da.name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da.name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da.name); ret = -ENOENT; goto finish; } if (tmp_req.result != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on secondary"); ret = tmp_req.result; if (ret != -ENOENT) goto rollback; @@ -151,12 +151,12 @@ __handle_secondary_request(void *param) ret = local_dev_remove(dev); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on primary\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on primary"); if (ret != -ENOENT) goto rollback; } } else { - RTE_LOG(ERR, EAL, "unsupported secondary to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "unsupported secondary to primary request"); ret = -ENOTSUP; } goto finish; @@ -174,7 +174,7 @@ __handle_secondary_request(void *param) finish: ret = send_response_to_secondary(&tmp_req, ret, bundle->peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary"); rte_devargs_reset(&da); free(bundle->peer); @@ -191,7 +191,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle = malloc(sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -204,7 +204,7 @@ 
handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle->peer = strdup(peer); if (bundle->peer == NULL) { free(bundle); - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -214,7 +214,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) */ ret = rte_eal_alarm_set(1, __handle_secondary_request, bundle); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to add mp task\n"); + RTE_LOG_LINE(ERR, EAL, "failed to add mp task"); free(bundle->peer); free(bundle); return send_response_to_secondary(req, ret, peer); @@ -257,14 +257,14 @@ static void __handle_primary_request(void *param) bus = rte_bus_find_by_name(da->bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da->bus->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da->bus->name); ret = -ENOENT; goto quit; } dev = bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da->name); + RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da->name); ret = -ENOENT; goto quit; } @@ -296,7 +296,7 @@ static void __handle_primary_request(void *param) memcpy(resp, req, sizeof(*resp)); resp->result = ret; if (rte_mp_reply(&mp_resp, bundle->peer) < 0) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); free(bundle->peer); free(bundle); @@ -320,11 +320,11 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) bundle = calloc(1, sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -336,12 +336,12 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) */ bundle->peer = (void *)strdup(peer); if (bundle->peer == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + RTE_LOG_LINE(ERR, EAL, "not enough memory"); free(bundle); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } @@ -356,7 +356,7 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) resp->result = ret; ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request"); return ret; } } @@ -378,7 +378,7 @@ int eal_dev_hotplug_request_to_primary(struct eal_dev_mp_req *req) ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts); if (ret || mp_reply.nb_received != 1) { - RTE_LOG(ERR, EAL, "Cannot send request to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send request to primary"); if (!ret) return -1; return ret; @@ -408,14 +408,14 @@ int eal_dev_hotplug_request_to_secondary(struct eal_dev_mp_req *req) if (ret != 0) { /* if IPC is not supported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "rte_mp_request_sync failed\n"); + RTE_LOG_LINE(ERR, EAL, "rte_mp_request_sync failed"); else ret = 0; return ret; } if (mp_reply.nb_sent != mp_reply.nb_received) { - RTE_LOG(ERR, EAL, "not all secondary reply\n"); + 
RTE_LOG_LINE(ERR, EAL, "not all secondary reply"); free(mp_reply.msgs); return -1; } @@ -448,7 +448,7 @@ int eal_mp_dev_hotplug_init(void) handle_secondary_request); /* primary is allowed to not support IPC */ if (ret != 0 && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } @@ -456,7 +456,7 @@ int eal_mp_dev_hotplug_init(void) ret = rte_mp_action_register(EAL_DEV_MP_ACTION_REQUEST, handle_primary_request); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c index f5d1c8c2e2..6e9d5b8660 100644 --- a/lib/eal/common/malloc_elem.c +++ b/lib/eal/common/malloc_elem.c @@ -148,7 +148,7 @@ malloc_elem_insert(struct malloc_elem *elem) /* first and last elements must be both NULL or both non-NULL */ if ((heap->first == NULL) != (heap->last == NULL)) { - RTE_LOG(ERR, EAL, "Heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is probably corrupt"); return; } @@ -628,7 +628,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(hide_end); } else if (len_after > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } @@ -647,7 +647,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(prev); } else if (len_before > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt"); return; } } diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c index 6b6cf9174c..010c84c36c 100644 --- a/lib/eal/common/malloc_heap.c +++ b/lib/eal/common/malloc_heap.c @@ -117,7 +117,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap_idx = malloc_socket_to_heap_id(msl->socket_id); if (heap_idx < 0) { - RTE_LOG(ERR, EAL, "Memseg list has invalid socket id\n"); + RTE_LOG_LINE(ERR, EAL, "Memseg list has invalid socket id"); return -1; } heap = &mcfg->malloc_heaps[heap_idx]; @@ -135,7 +135,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap->total_size += len; - RTE_LOG(DEBUG, EAL, "Added %zuM to heap on socket %i\n", len >> 20, + RTE_LOG_LINE(DEBUG, EAL, "Added %zuM to heap on socket %i", len >> 20, msl->socket_id); return 0; } @@ -308,7 +308,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* first, check if we're allowed to allocate this memory */ if (eal_memalloc_mem_alloc_validate(socket, heap->total_size + alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n"); + RTE_LOG_LINE(DEBUG, EAL, "User has disallowed allocation"); return NULL; } @@ -324,7 +324,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* check if we wanted contiguous memory but didn't get it */ if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space", __func__); goto fail; } @@ -352,8 +352,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, * which could solve some situations when IOVA VA is not * really needed. 
*/ - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask", __func__); /* @@ -363,8 +363,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, */ if ((rte_eal_iova_mode() == RTE_IOVA_VA) && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter", __func__); goto fail; } @@ -440,7 +440,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz, } heap->total_size += alloc_sz; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB", socket, alloc_sz >> 20ULL); free(ms); @@ -693,7 +693,7 @@ malloc_heap_alloc_on_heap_id(const char *type, size_t size, /* this should have succeeded */ if (ret == NULL) - RTE_LOG(ERR, EAL, "Error allocating from heap\n"); + RTE_LOG_LINE(ERR, EAL, "Error allocating from heap"); } alloc_unlock: rte_spinlock_unlock(&(heap->lock)); @@ -1040,7 +1040,7 @@ malloc_heap_free(struct malloc_elem *elem) /* we didn't exit early, meaning we have unmapped some pages */ unmapped = true; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n", + RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB", msl->socket_id, aligned_len >> 20ULL); rte_mcfg_mem_write_unlock(); @@ -1199,7 +1199,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], } } if (msl == NULL) { - RTE_LOG(ERR, EAL, "Couldn't find empty memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find empty memseg list"); rte_errno = ENOSPC; return NULL; } @@ -1210,7 +1210,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], /* create the backing fbarray */ if (rte_fbarray_init(&msl->memseg_arr, fbarray_name, n_pages, sizeof(struct rte_memseg)) < 0) { - RTE_LOG(ERR, EAL, "Couldn't create fbarray backing the memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't create fbarray backing the memseg list"); return NULL; } arr = &msl->memseg_arr; @@ -1310,7 +1310,7 @@ malloc_heap_add_external_memory(struct malloc_heap *heap, heap->total_size += msl->len; /* all done! */ - RTE_LOG(DEBUG, EAL, "Added segment for heap %s starting at %p\n", + RTE_LOG_LINE(DEBUG, EAL, "Added segment for heap %s starting at %p", heap->name, msl->base_va); /* notify all subscribers that a new memory area has been added */ @@ -1356,7 +1356,7 @@ malloc_heap_create(struct malloc_heap *heap, const char *heap_name) /* prevent overflow. did you really create 2 billion heaps??? 
*/ if (next_socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's"); rte_errno = ENOSPC; return -1; } @@ -1382,17 +1382,17 @@ int malloc_heap_destroy(struct malloc_heap *heap) { if (heap->alloc_count != 0) { - RTE_LOG(ERR, EAL, "Heap is still in use\n"); + RTE_LOG_LINE(ERR, EAL, "Heap is still in use"); rte_errno = EBUSY; return -1; } if (heap->first != NULL || heap->last != NULL) { - RTE_LOG(ERR, EAL, "Heap still contains memory segments\n"); + RTE_LOG_LINE(ERR, EAL, "Heap still contains memory segments"); rte_errno = EBUSY; return -1; } if (heap->total_size != 0) - RTE_LOG(ERR, EAL, "Total size not zero, heap is likely corrupt\n"); + RTE_LOG_LINE(ERR, EAL, "Total size not zero, heap is likely corrupt"); /* Reset all of the heap but the (hold) lock so caller can release it. */ RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0); @@ -1411,7 +1411,7 @@ rte_eal_malloc_heap_init(void) eal_get_internal_configuration(); if (internal_conf->match_allocations) - RTE_LOG(DEBUG, EAL, "Hugepages will be freed exactly as allocated.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Hugepages will be freed exactly as allocated."); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* assign min socket ID to external heaps */ @@ -1431,7 +1431,7 @@ rte_eal_malloc_heap_init(void) } if (register_mp_requests()) { - RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't register malloc multiprocess actions"); return -1; } diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c index 4d62397aba..e0f49bc471 100644 --- a/lib/eal/common/malloc_mp.c +++ b/lib/eal/common/malloc_mp.c @@ -156,7 +156,7 @@ handle_sync(const struct rte_mp_msg *msg, const void *peer) int ret; if (req->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected request from primary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from primary"); return -1; } @@ -189,19 +189,19 @@ handle_free_request(const struct malloc_mp_req *m) /* check if the requested memory actually exists */ msl = rte_mem_virt2memseg_list(start); if (msl == NULL) { - RTE_LOG(ERR, EAL, "Requested to free unknown memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free unknown memory"); return -1; } /* check if end is within the same memory region */ if (rte_mem_virt2memseg_list(end) != msl) { - RTE_LOG(ERR, EAL, "Requested to free memory spanning multiple regions\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free memory spanning multiple regions"); return -1; } /* we're supposed to only free memory that's not external */ if (msl->external) { - RTE_LOG(ERR, EAL, "Requested to free external memory\n"); + RTE_LOG_LINE(ERR, EAL, "Requested to free external memory"); return -1; } @@ -228,13 +228,13 @@ handle_alloc_request(const struct malloc_mp_req *m, /* this is checked by the API, but we need to prevent divide by zero */ if (ar->page_sz == 0 || !rte_is_power_of_2(ar->page_sz)) { - RTE_LOG(ERR, EAL, "Attempting to allocate with invalid page size\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate with invalid page size"); return -1; } /* heap idx is index into the heap array, not socket ID */ if (ar->malloc_heap_idx >= RTE_MAX_HEAPS) { - RTE_LOG(ERR, EAL, "Attempting to allocate from invalid heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from invalid heap"); return -1; } @@ -247,7 +247,7 @@ handle_alloc_request(const struct malloc_mp_req *m, * socket ID's are always lower than RTE_MAX_NUMA_NODES. 
*/ if (heap->socket_id >= RTE_MAX_NUMA_NODES) { - RTE_LOG(ERR, EAL, "Attempting to allocate from external heap\n"); + RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from external heap"); return -1; } @@ -258,7 +258,7 @@ handle_alloc_request(const struct malloc_mp_req *m, /* we can't know in advance how many pages we'll need, so we malloc */ ms = malloc(sizeof(*ms) * n_segs); if (ms == NULL) { - RTE_LOG(ERR, EAL, "Couldn't allocate memory for request state\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't allocate memory for request state"); return -1; } memset(ms, 0, sizeof(*ms) * n_segs); @@ -307,13 +307,13 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) /* make sure it's not a dupe */ entry = find_request_by_id(m->id); if (entry != NULL) { - RTE_LOG(ERR, EAL, "Duplicate request id\n"); + RTE_LOG_LINE(ERR, EAL, "Duplicate request id"); goto fail; } entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate memory for request"); goto fail; } @@ -325,7 +325,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) } else if (m->t == REQ_TYPE_FREE) { ret = handle_free_request(m); } else { - RTE_LOG(ERR, EAL, "Unexpected request from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected request from secondary"); goto fail; } @@ -345,7 +345,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) resp->id = m->id; if (rte_mp_sendmsg(&resp_msg)) { - RTE_LOG(ERR, EAL, "Couldn't send response\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send response"); goto fail; } /* we did not modify the request */ @@ -376,7 +376,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) handle_sync_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't send sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't send sync request"); if (m->t == REQ_TYPE_ALLOC) free(entry->alloc_state.ms); goto fail; @@ -414,7 +414,7 @@ handle_sync_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } @@ -428,12 +428,12 @@ handle_sync_response(const struct rte_mp_msg *request, (struct malloc_mp_req *)reply->msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response to sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response to sync request"); result = REQ_RESULT_FAIL; break; } if (resp->id != entry->user_req.id) { - RTE_LOG(ERR, EAL, "Response to wrong sync request\n"); + RTE_LOG_LINE(ERR, EAL, "Response to wrong sync request"); result = REQ_RESULT_FAIL; break; } @@ -458,7 +458,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry); @@ -482,7 +482,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry->alloc_state.ms); @@ -524,7 +524,7 @@ 
handle_sync_response(const struct rte_mp_msg *request, handle_rollback_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not send rollback request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send rollback request to secondary process"); /* we couldn't send rollback request, but that's OK - * secondary will time out, and memory has been removed @@ -536,7 +536,7 @@ handle_sync_response(const struct rte_mp_msg *request, goto fail; } } else { - RTE_LOG(ERR, EAL, " to sync request of unknown type\n"); + RTE_LOG_LINE(ERR, EAL, " to sync request of unknown type"); goto fail; } @@ -564,12 +564,12 @@ handle_rollback_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto fail; } if (entry->user_req.t != REQ_TYPE_ALLOC) { - RTE_LOG(ERR, EAL, "Unexpected active request\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected active request"); goto fail; } @@ -582,7 +582,7 @@ handle_rollback_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process"); /* clean up */ TAILQ_REMOVE(&mp_request_list.list, entry, next); @@ -657,14 +657,14 @@ request_sync(void) if (ret != 0) { /* if IPC is unsupported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "Could not send sync request to secondary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not send sync request to secondary process"); else ret = 0; goto out; } if (reply.nb_received != reply.nb_sent) { - RTE_LOG(ERR, EAL, "Not all secondaries have responded\n"); + RTE_LOG_LINE(ERR, EAL, "Not all secondaries have responded"); goto out; } @@ -672,15 +672,15 @@ request_sync(void) struct malloc_mp_req *resp = (struct malloc_mp_req *)reply.msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response from secondary\n"); + RTE_LOG_LINE(ERR, EAL, "Unexpected response from secondary"); goto out; } if (resp->id != req->id) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong request ID"); goto out; } if (resp->result != REQ_RESULT_SUCCESS) { - RTE_LOG(ERR, EAL, "Secondary process failed to synchronize\n"); + RTE_LOG_LINE(ERR, EAL, "Secondary process failed to synchronize"); goto out; } } @@ -711,14 +711,14 @@ request_to_primary(struct malloc_mp_req *user_req) entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for request\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for request"); goto fail; } memset(entry, 0, sizeof(*entry)); if (gettimeofday(&now, NULL) < 0) { - RTE_LOG(ERR, EAL, "Cannot get current time\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current time"); goto fail; } @@ -740,7 +740,7 @@ request_to_primary(struct malloc_mp_req *user_req) memcpy(msg_req, user_req, sizeof(*msg_req)); if (rte_mp_sendmsg(&msg)) { - RTE_LOG(ERR, EAL, "Cannot send message to primary\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot send message to primary"); goto fail; } @@ -759,7 +759,7 @@ request_to_primary(struct malloc_mp_req *user_req) } while (ret != 0 && ret != ETIMEDOUT); if (entry->state != REQ_STATE_COMPLETE) { - RTE_LOG(ERR, EAL, "Request timed out\n"); + RTE_LOG_LINE(ERR, EAL, "Request timed out"); ret = -1; } else { ret = 0; @@ -783,24 +783,24 @@ 
register_mp_requests(void) /* it's OK for primary to not support IPC */ if (rte_mp_action_register(MP_ACTION_REQUEST, handle_request) && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_REQUEST); return -1; } } else { if (rte_mp_action_register(MP_ACTION_SYNC, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_ROLLBACK, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_RESPONSE, handle_response)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action", MP_ACTION_RESPONSE); return -1; } diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c index e0494b2010..699022ae1c 100644 --- a/lib/eal/common/rte_keepalive.c +++ b/lib/eal/common/rte_keepalive.c @@ -53,7 +53,7 @@ struct rte_keepalive { static void print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core) { - RTE_LOG(INFO, EAL, "%sLast seen %" PRId64 "ms ago.\n", + RTE_LOG_LINE(INFO, EAL, "%sLast seen %" PRId64 "ms ago.", msg, ((rte_rdtsc() - keepcfg->last_alive[idx_core])*1000) / rte_get_tsc_hz() diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c index 9db0c399ae..9b3038805a 100644 --- a/lib/eal/common/rte_malloc.c +++ b/lib/eal/common/rte_malloc.c @@ -35,7 +35,7 @@ mem_free(void *addr, const bool trace_ena) if (addr == NULL) return; if (malloc_heap_free(malloc_elem_from_data(addr)) < 0) - RTE_LOG(ERR, EAL, "Error: Invalid memory\n"); + RTE_LOG_LINE(ERR, EAL, "Error: Invalid memory"); } void @@ -171,7 +171,7 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket) struct malloc_elem *elem = malloc_elem_from_data(ptr); if (elem == NULL) { - RTE_LOG(ERR, EAL, "Error: memory corruption detected\n"); + RTE_LOG_LINE(ERR, EAL, "Error: memory corruption detected"); return NULL; } @@ -598,7 +598,7 @@ rte_malloc_heap_create(const char *heap_name) /* existing heap */ if (strncmp(heap_name, tmp->name, RTE_HEAP_NAME_MAX_LEN) == 0) { - RTE_LOG(ERR, EAL, "Heap %s already exists\n", + RTE_LOG_LINE(ERR, EAL, "Heap %s already exists", heap_name); rte_errno = EEXIST; ret = -1; @@ -611,7 +611,7 @@ rte_malloc_heap_create(const char *heap_name) } } if (heap == NULL) { - RTE_LOG(ERR, EAL, "Cannot create new heap: no space\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new heap: no space"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -643,7 +643,7 @@ rte_malloc_heap_destroy(const char *heap_name) /* start from non-socket heaps */ heap = find_named_heap(heap_name); if (heap == NULL) { - RTE_LOG(ERR, EAL, "Heap %s not found\n", heap_name); + RTE_LOG_LINE(ERR, EAL, "Heap %s not found", heap_name); rte_errno = ENOENT; ret = -1; goto unlock; diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c index e183d2e631..3ed4186add 100644 --- a/lib/eal/common/rte_service.c +++ b/lib/eal/common/rte_service.c @@ -87,8 +87,8 @@ rte_service_init(void) RTE_BUILD_BUG_ON(RTE_SERVICE_NUM_MAX > 64); if (rte_service_library_initialized) { - RTE_LOG(NOTICE, EAL, - "service library init() called, init flag %d\n", + RTE_LOG_LINE(NOTICE, EAL, + "service library init() called, init flag %d", rte_service_library_initialized); return -EALREADY; } @@ -97,14 
+97,14 @@ rte_service_init(void) sizeof(struct rte_service_spec_impl), RTE_CACHE_LINE_SIZE); if (!rte_services) { - RTE_LOG(ERR, EAL, "error allocating rte services array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating rte services array"); goto fail_mem; } lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE, sizeof(struct core_state), RTE_CACHE_LINE_SIZE); if (!lcore_states) { - RTE_LOG(ERR, EAL, "error allocating core states array\n"); + RTE_LOG_LINE(ERR, EAL, "error allocating core states array"); goto fail_mem; } diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c index 568e06e9ed..2c5f71a85a 100644 --- a/lib/eal/freebsd/eal.c +++ b/lib/eal/freebsd/eal.c @@ -117,7 +117,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -127,7 +127,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -136,8 +136,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. Is another primary " + "process running?", pathname); return -1; } @@ -145,7 +145,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -156,7 +156,7 @@ rte_eal_config_create(void) cfg_len_aligned, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0); if (mapped_mem_cfg_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; @@ -190,7 +190,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -202,7 +202,7 @@ rte_eal_config_attach(void) if (rte_mem_cfg_addr == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -242,14 +242,14 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! 
error %i (%s)", errno, strerror(errno)); return -1; } @@ -280,7 +280,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY"); return ptype; @@ -307,20 +307,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -454,7 +454,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -469,16 +469,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on FreeBSD", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on FreeBSD\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on FreeBSD", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on FreeBSD\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on FreeBSD", opt); } eal_usage(prgname); ret = -1; @@ -489,11 +489,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -545,7 +545,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } @@ -572,7 +572,7 @@ rte_eal_iopl_init(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Launch threads, called at application init(). 
*/ @@ -629,7 +629,8 @@ rte_eal_init(int argc, char **argv) /* FreeBSD always uses legacy memory model */ internal_conf->legacy_mem = true; if (internal_conf->in_memory) { - RTE_LOG(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'\n", OPT_IN_MEMORY); + RTE_LOG_LINE(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'", + OPT_IN_MEMORY); internal_conf->in_memory = false; } @@ -695,14 +696,14 @@ rte_eal_init(int argc, char **argv) has_phys_addr = internal_conf->no_hugetlbfs == 0; iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -725,7 +726,7 @@ rte_eal_init(int argc, char **argv) } rte_eal_get_configuration()->iova_mode = iova_mode; - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -751,11 +752,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -818,7 +819,7 @@ rte_eal_init(int argc, char **argv) ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? 
"" : "..."); @@ -917,7 +918,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c index e5b0909a45..2493adf8ae 100644 --- a/lib/eal/freebsd/eal_alarm.c +++ b/lib/eal/freebsd/eal_alarm.c @@ -59,7 +59,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c index c3dfe9108f..8d35148ba3 100644 --- a/lib/eal/freebsd/eal_dev.c +++ b/lib/eal/freebsd/eal_dev.c @@ -8,27 +8,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c index e58e618469..3c97daa444 100644 --- a/lib/eal/freebsd/eal_hugepage_info.c +++ b/lib/eal/freebsd/eal_hugepage_info.c @@ -72,7 +72,7 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers"); return -1; } @@ -81,28 +81,28 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size\n"); + RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size"); return -1; } fd = open(CONTIGMEM_DEV, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "could not open "CONTIGMEM_DEV"\n"); + RTE_LOG_LINE(ERR, EAL, "could not open "CONTIGMEM_DEV); return -1; } if (flock(fd, LOCK_EX | LOCK_NB) < 0) { - RTE_LOG(ERR, EAL, "could not lock memory. Is another DPDK process running?\n"); + RTE_LOG_LINE(ERR, EAL, "could not lock memory. 
Is another DPDK process running?"); return -1; } if (buffer_size >= 1<<30) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB", num_buffers, (int)(buffer_size>>30)); else if (buffer_size >= 1<<20) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB", num_buffers, (int)(buffer_size>>20)); else - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB\n", + RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB", num_buffers, (int)(buffer_size>>10)); strlcpy(hpi->hugedir, CONTIGMEM_DEV, sizeof(hpi->hugedir)); @@ -117,7 +117,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL ) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); return -1; } @@ -132,7 +132,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } @@ -154,14 +154,14 @@ eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 2b31dfb099..ffba823808 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -90,12 +90,12 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, - "Registering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Registering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active: %d\n", kq); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active: %d", kq); return -ENODEV; } @@ -120,7 +120,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } @@ -132,13 +132,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; goto fail; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(src); src = NULL; @@ -167,7 +167,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, ke.flags = EV_ADD; /* mark for addition to the queue 
*/ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert interrupt handle to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert interrupt handle to kevent"); ret = -ENODEV; goto fail; } @@ -181,10 +181,10 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, * user. so, don't output it unless debug log level set. */ if (errno == ENODEV) - RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n", + RTE_LOG_LINE(DEBUG, EAL, "Interrupt handle %d not supported", rte_intr_fd_get(src->intr_handle)); else - RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error adding fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); ret = -errno; @@ -222,13 +222,13 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -277,12 +277,12 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + RTE_LOG_LINE(ERR, EAL, "Kqueue is not active"); return -ENODEV; } @@ -312,7 +312,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, ke.flags = EV_DELETE; /* mark for deletion from the queue */ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); ret = -ENODEV; goto out; } @@ -321,7 +321,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected condition @@ -396,7 +396,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -437,7 +437,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -513,13 +513,13 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", event_fd, strerror(errno)); } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", event_fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", event_fd); else call = true; } @@ -556,7 +556,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) ke.flags = EV_DELETE; if (intr_source_to_kevent(src->intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent"); rte_spinlock_unlock(&intr_lock); return; } @@ -565,7 +565,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected @@ -606,8 +606,8 @@ eal_intr_thread_main(void *arg __rte_unused) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "kevent returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "kevent returns with fail"); break; } /* kevent timeout, will never happen here */ @@ -632,7 +632,7 @@ rte_eal_intr_init(void) kq = kqueue(); if (kq < 0) { - RTE_LOG(ERR, EAL, "Cannot create kqueue instance\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create kqueue instance"); return -1; } @@ -641,8 +641,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c index d9ef4bc9c5..cfd375076a 100644 --- a/lib/eal/freebsd/eal_lcore.c +++ b/lib/eal/freebsd/eal_lcore.c @@ -30,7 +30,7 @@ eal_get_ncpus(void) if (ncpu < 0) { sysctl(mib, 2, &ncpu, &len, NULL, 0); - RTE_LOG(INFO, EAL, "Sysctl reports %d cpus\n", ncpu); + RTE_LOG_LINE(INFO, EAL, "Sysctl reports %d cpus", ncpu); } return ncpu; } diff --git a/lib/eal/freebsd/eal_memalloc.c b/lib/eal/freebsd/eal_memalloc.c index 00ab02cb63..f96ed2ce21 100644 --- a/lib/eal/freebsd/eal_memalloc.c +++ b/lib/eal/freebsd/eal_memalloc.c @@ -15,21 +15,21 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms __rte_unused, int __rte_unused n_segs, size_t __rte_unused page_sz, int __rte_unused socket, bool __rte_unused exact) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } struct rte_memseg * eal_memalloc_alloc_seg(size_t __rte_unused page_sz, int __rte_unused socket) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return NULL; } int eal_memalloc_free_seg(struct rte_memseg *ms __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } @@ -37,14 +37,14 @@ int eal_memalloc_free_seg_bulk(struct rte_memseg **ms __rte_unused, int n_segs __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } int eal_memalloc_sync_with_primary(void) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c index 5c6165c580..195f570da0 100644 --- a/lib/eal/freebsd/eal_memory.c +++ b/lib/eal/freebsd/eal_memory.c @@ -84,7 +84,7 @@ rte_eal_hugepage_init(void) addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); return -1; } @@ -132,8 +132,8 @@ rte_eal_hugepage_init(void) error = sysctlbyname(physaddr_str, &physaddr, &sysctl_size, NULL, 0); if (error < 0) { - RTE_LOG(ERR, EAL, "Failed to get physical addr for buffer %u " - "from %s\n", 
j, hpi->hugedir); + RTE_LOG_LINE(ERR, EAL, "Failed to get physical addr for buffer %u " + "from %s", j, hpi->hugedir); return -1; } @@ -172,8 +172,8 @@ rte_eal_hugepage_init(void) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } arr = &msl->memseg_arr; @@ -190,7 +190,7 @@ rte_eal_hugepage_init(void) hpi->lock_descriptor, j * EAL_PAGE_SIZE); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", j, hpi->hugedir); return -1; } @@ -205,8 +205,8 @@ rte_eal_hugepage_init(void) rte_fbarray_set_used(arr, ms_idx); - RTE_LOG(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" - PRIx64", len %zu\n", + RTE_LOG_LINE(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" + PRIx64", len %zu", seg_idx++, addr, physaddr, page_sz); total_mem += seg->len; @@ -215,9 +215,9 @@ rte_eal_hugepage_init(void) break; } if (total_mem < internal_conf->memory) { - RTE_LOG(ERR, EAL, "Couldn't reserve requested memory, " + RTE_LOG_LINE(ERR, EAL, "Couldn't reserve requested memory, " "requested: %" PRIu64 "M " - "available: %" PRIu64 "M\n", + "available: %" PRIu64 "M", internal_conf->memory >> 20, total_mem >> 20); return -1; } @@ -268,7 +268,7 @@ rte_eal_hugepage_attach(void) /* Obtain a file descriptor for contiguous memory */ fd_hugepage = open(cur_hpi->hugedir, O_RDWR); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", cur_hpi->hugedir); goto error; } @@ -277,7 +277,7 @@ rte_eal_hugepage_attach(void) /* Map the contiguous memory into each memory segment */ if (rte_memseg_walk(attach_segment, &wa) < 0) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s", wa.seg_idx, cur_hpi->hugedir); goto error; } @@ -402,8 +402,8 @@ memseg_primary_init(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -424,7 +424,7 @@ memseg_primary_init(void) type_msl_idx++; if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list"); return -1; } } @@ -449,13 +449,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c index 6f97a3c2c1..0f7284768a 100644 --- a/lib/eal/freebsd/eal_thread.c +++ b/lib/eal/freebsd/eal_thread.c @@ -38,7 +38,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); 
if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); pthread_set_name_np((pthread_t)thread_id.opaque_id, truncated); } diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c index beff755a47..61488ff641 100644 --- a/lib/eal/freebsd/eal_timer.c +++ b/lib/eal/freebsd/eal_timer.c @@ -36,20 +36,20 @@ get_tsc_freq(void) tmp = 0; if (sysctlbyname("kern.timecounter.smp_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not safe to use in SMP mode\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not safe to use in SMP mode"); tmp = 0; if (sysctlbyname("kern.timecounter.invariant_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not invariant\n"); + RTE_LOG_LINE(WARNING, EAL, "TSC is not invariant"); sz = sizeof(tsc_hz); if (sysctlbyname("machdep.tsc_freq", &tsc_hz, &sz, NULL, 0)) { - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c index 57da058cec..8aaff34d54 100644 --- a/lib/eal/linux/eal.c +++ b/lib/eal/linux/eal.c @@ -94,7 +94,7 @@ eal_clean_runtime_dir(void) /* open directory */ dir = opendir(runtime_dir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open runtime directory %s", runtime_dir); goto error; } @@ -102,14 +102,14 @@ eal_clean_runtime_dir(void) /* lock the directory before doing anything, to avoid races */ if (flock(dir_fd, LOCK_EX) < 0) { - RTE_LOG(ERR, EAL, "Unable to lock runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to lock runtime directory %s", runtime_dir); goto error; } dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read runtime directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read runtime directory %s", runtime_dir); goto error; } @@ -159,7 +159,7 @@ eal_clean_runtime_dir(void) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while clearing runtime dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while clearing runtime dir: %s", strerror(errno)); return -1; @@ -200,7 +200,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -210,7 +210,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -219,8 +219,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. 
Is another primary " + "process running?", pathname); return -1; } @@ -228,7 +228,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -242,7 +242,7 @@ rte_eal_config_create(void) munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config"); return -1; } @@ -275,7 +275,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -287,7 +287,7 @@ rte_eal_config_attach(void) if (mem_config == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -328,13 +328,13 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", rte_mem_cfg_addr, mem_config); + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -365,7 +365,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -392,20 +392,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + RTE_LOG_LINE(ERR, EAL, "Invalid process type %d", config->process_type); return -1; } @@ -474,7 +474,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg) len = strnlen(strval, SOCKET_MEM_STRLEN); if (len == SOCKET_MEM_STRLEN) { - RTE_LOG(ERR, EAL, "--socket-mem is too long\n"); + RTE_LOG_LINE(ERR, EAL, "--socket-mem is too long"); return -1; } @@ -595,13 +595,13 @@ eal_parse_huge_worker_stack(const char *arg) int ret; if (pthread_attr_init(&attr) != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size); pthread_attr_destroy(&attr); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size"); return -1; } } else { @@ -617,7 +617,7 @@ eal_parse_huge_worker_stack(const char *arg) cfg->huge_worker_stack_size = stack_size * 1024; } - RTE_LOG(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack\n", + RTE_LOG_LINE(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack", cfg->huge_worker_stack_size / 1024); return 0; } @@ -673,7 +673,7 @@ eal_parse_args(int argc, char **argv) { char *hdir = strdup(optarg); if (hdir == NULL) - RTE_LOG(ERR, EAL, "Could not store hugepage directory\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store hugepage directory"); else { /* free old hugepage dir */ free(internal_conf->hugepage_dir); @@ -685,7 +685,7 @@ eal_parse_args(int argc, char **argv) { char *prefix = strdup(optarg); if (prefix == NULL) - RTE_LOG(ERR, EAL, "Could not store file prefix\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store file prefix"); else { /* free old prefix */ free(internal_conf->hugefile_prefix); @@ -696,8 +696,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_MEM_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_mem) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_MEM "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_MEM); eal_usage(prgname); ret = -1; goto out; @@ -708,8 +708,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_LIMIT_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_limit) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_LIMIT "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_SOCKET_LIMIT); eal_usage(prgname); ret = -1; goto out; @@ -719,8 +719,8 @@ eal_parse_args(int argc, char **argv) case OPT_VFIO_INTR_NUM: if (eal_parse_vfio_intr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_INTR "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_INTR); eal_usage(prgname); ret = -1; goto out; @@ -729,8 +729,8 @@ eal_parse_args(int argc, char **argv) case 
OPT_VFIO_VF_TOKEN_NUM: if (eal_parse_vfio_vf_token(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_VF_TOKEN "\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameters for --" + OPT_VFIO_VF_TOKEN); eal_usage(prgname); ret = -1; goto out; @@ -745,7 +745,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -761,8 +761,8 @@ eal_parse_args(int argc, char **argv) case OPT_HUGE_WORKER_STACK_NUM: if (eal_parse_huge_worker_stack(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_HUGE_WORKER_STACK"\n"); + RTE_LOG_LINE(ERR, EAL, "invalid parameter for --" + OPT_HUGE_WORKER_STACK); eal_usage(prgname); ret = -1; goto out; @@ -771,16 +771,16 @@ eal_parse_args(int argc, char **argv) default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Linux", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Linux\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Linux", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Linux\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Linux", opt); } eal_usage(prgname); ret = -1; @@ -791,11 +791,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -843,7 +843,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!"); } static int @@ -880,7 +880,7 @@ static int rte_eal_vfio_setup(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* @@ -1073,27 +1073,27 @@ rte_eal_init(int argc, char **argv) enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Buses did not request a specific IOVA mode."); if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else if (!phys_addrs) { /* if we have no access to physical addresses, * pick IOVA as VA mode. 
*/ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode."); } else if (is_iommu_enabled()) { /* we have an IOMMU, pick IOVA as VA mode */ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode."); } else { /* physical addresses available, and no IOMMU * found, so pick IOVA as PA. */ iova_mode = RTE_IOVA_PA; - RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode."); } } rte_eal_get_configuration()->iova_mode = iova_mode; @@ -1114,7 +1114,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -1138,11 +1138,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -1229,7 +1229,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? "" : "..."); @@ -1350,7 +1350,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + RTE_LOG_LINE(WARNING, EAL, "Already called cleanup"); rte_errno = EALREADY; return -1; } @@ -1420,7 +1420,7 @@ rte_eal_check_module(const char *module_name) /* Check if there is sysfs mounted */ if (stat("/sys/module", &st) != 0) { - RTE_LOG(DEBUG, EAL, "sysfs is not mounted! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "sysfs is not mounted! error %i (%s)", errno, strerror(errno)); return -1; } @@ -1428,12 +1428,12 @@ rte_eal_check_module(const char *module_name) /* A module might be built-in, therefore try sysfs */ n = snprintf(sysfs_mod_name, PATH_MAX, "/sys/module/%s", module_name); if (n < 0 || n > PATH_MAX) { - RTE_LOG(DEBUG, EAL, "Could not format module path\n"); + RTE_LOG_LINE(DEBUG, EAL, "Could not format module path"); return -1; } if (stat(sysfs_mod_name, &st) != 0) { - RTE_LOG(DEBUG, EAL, "Module %s not found! error %i (%s)\n", + RTE_LOG_LINE(DEBUG, EAL, "Module %s not found! 
error %i (%s)", sysfs_mod_name, errno, strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c index 766ba2c251..3c0464ad10 100644 --- a/lib/eal/linux/eal_alarm.c +++ b/lib/eal/linux/eal_alarm.c @@ -65,7 +65,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c index ac76f6174d..16e817121d 100644 --- a/lib/eal/linux/eal_dev.c +++ b/lib/eal/linux/eal_dev.c @@ -64,7 +64,7 @@ static void sigbus_handler(int signum, siginfo_t *info, { int ret; - RTE_LOG(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p\n", + RTE_LOG_LINE(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p", info->si_addr); rte_spinlock_lock(&failure_handle_lock); @@ -88,7 +88,7 @@ static void sigbus_handler(int signum, siginfo_t *info, } } - RTE_LOG(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!\n"); + RTE_LOG_LINE(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!"); } static int cmp_dev_name(const struct rte_device *dev, @@ -108,7 +108,7 @@ dev_uev_socket_fd_create(void) fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK, NETLINK_KOBJECT_UEVENT); if (fd < 0) { - RTE_LOG(ERR, EAL, "create uevent fd failed.\n"); + RTE_LOG_LINE(ERR, EAL, "create uevent fd failed."); return -1; } @@ -119,7 +119,7 @@ dev_uev_socket_fd_create(void) ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr)); if (ret < 0) { - RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to bind uevent socket."); goto err; } @@ -245,18 +245,18 @@ dev_uev_handler(__rte_unused void *param) return; else if (ret <= 0) { /* connection is closed or broken, can not up again. 
*/ - RTE_LOG(ERR, EAL, "uevent socket connection is broken.\n"); + RTE_LOG_LINE(ERR, EAL, "uevent socket connection is broken."); rte_eal_alarm_set(1, dev_delayed_unregister, NULL); return; } ret = dev_uev_parse(buf, &uevent, EAL_UEV_MSG_LEN); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Ignoring uevent '%s'\n", buf); + RTE_LOG_LINE(DEBUG, EAL, "Ignoring uevent '%s'", buf); return; } - RTE_LOG(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)\n", + RTE_LOG_LINE(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)", uevent.devname, uevent.type, uevent.subsystem); switch (uevent.subsystem) { @@ -273,7 +273,7 @@ dev_uev_handler(__rte_unused void *param) rte_spinlock_lock(&failure_handle_lock); bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname); goto failure_handle_err; } @@ -281,15 +281,15 @@ dev_uev_handler(__rte_unused void *param) dev = bus->find_device(NULL, cmp_dev_name, uevent.devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s) on " - "bus (%s)\n", uevent.devname, busname); + RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s) on " + "bus (%s)", uevent.devname, busname); goto failure_handle_err; } ret = bus->hot_unplug_handler(dev); if (ret) { - RTE_LOG(ERR, EAL, "Can not handle hot-unplug " - "for device (%s)\n", dev->name); + RTE_LOG_LINE(ERR, EAL, "Can not handle hot-unplug " + "for device (%s)", dev->name); } rte_spinlock_unlock(&failure_handle_lock); } @@ -318,7 +318,7 @@ rte_dev_event_monitor_start(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle"); goto exit; } @@ -332,7 +332,7 @@ rte_dev_event_monitor_start(void) ret = dev_uev_socket_fd_create(); if (ret) { - RTE_LOG(ERR, EAL, "error create device event fd.\n"); + RTE_LOG_LINE(ERR, EAL, "error create device event fd."); goto exit; } @@ -362,7 +362,7 @@ rte_dev_event_monitor_stop(void) rte_rwlock_write_lock(&monitor_lock); if (!monitor_refcount) { - RTE_LOG(ERR, EAL, "device event monitor already stopped\n"); + RTE_LOG_LINE(ERR, EAL, "device event monitor already stopped"); goto exit; } @@ -374,7 +374,7 @@ rte_dev_event_monitor_stop(void) ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler, (void *)-1); if (ret < 0) { - RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n"); + RTE_LOG_LINE(ERR, EAL, "fail to unregister uevent callback."); goto exit; } @@ -429,8 +429,8 @@ rte_dev_hotplug_handle_enable(void) ret = dev_sigbus_handler_register(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to register sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to register sigbus handler for devices."); hotplug_handle = true; @@ -444,8 +444,8 @@ rte_dev_hotplug_handle_disable(void) ret = dev_sigbus_handler_unregister(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to unregister sigbus handler for devices.\n"); + RTE_LOG_LINE(ERR, EAL, + "fail to unregister sigbus handler for devices."); hotplug_handle = false; diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 36a495fb1f..971c57989d 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -110,7 +110,7 @@ get_num_hugepages(const char *subdir, size_t sz, unsigned int reusable_pages) over_pages = 0; if (num_pages == 0 && over_pages == 0 && reusable_pages) - RTE_LOG(WARNING, EAL, "No available %zu kB hugepages 
reported\n", + RTE_LOG_LINE(WARNING, EAL, "No available %zu kB hugepages reported", sz >> 10); num_pages += over_pages; @@ -155,7 +155,7 @@ get_num_hugepages_on_node(const char *subdir, unsigned int socket, size_t sz) return 0; if (num_pages == 0) - RTE_LOG(WARNING, EAL, "No free %zu kB hugepages reported on node %u\n", + RTE_LOG_LINE(WARNING, EAL, "No free %zu kB hugepages reported on node %u", sz >> 10, socket); /* @@ -239,7 +239,7 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len) if (rte_strsplit(buf, sizeof(buf), splitstr, _FIELDNAME_MAX, split_tok) != _FIELDNAME_MAX) { - RTE_LOG(ERR, EAL, "Error parsing %s\n", proc_mounts); + RTE_LOG_LINE(ERR, EAL, "Error parsing %s", proc_mounts); break; /* return NULL */ } @@ -325,7 +325,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dir = opendir(hugedir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to open hugepage directory %s", hugedir); goto error; } @@ -333,7 +333,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read hugepage directory %s\n", + RTE_LOG_LINE(ERR, EAL, "Unable to read hugepage directory %s", hugedir); goto error; } @@ -377,7 +377,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while walking hugepage dir: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error while walking hugepage dir: %s", strerror(errno)); return -1; @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; @@ -492,8 +492,8 @@ hugepage_info_init(void) dir = opendir(sys_dir_path); if (dir == NULL) { - RTE_LOG(ERR, EAL, - "Cannot open directory %s to read system hugepage info\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open directory %s to read system hugepage info", sys_dir_path); return -1; } @@ -520,10 +520,10 @@ hugepage_info_init(void) num_pages = get_num_hugepages(dirent->d_name, hpi->hugepage_sz, 0); if (num_pages > 0) - RTE_LOG(NOTICE, EAL, + RTE_LOG_LINE(NOTICE, EAL, "%" PRIu32 " hugepages of size " "%" PRIu64 " reserved, but no mounted " - "hugetlbfs found for that size\n", + "hugetlbfs found for that size", num_pages, hpi->hugepage_sz); /* if we have kernel support for reserving hugepages * through mmap, and we're in in-memory mode, treat this @@ -533,9 +533,9 @@ hugepage_info_init(void) */ #ifdef MAP_HUGE_SHIFT if (internal_conf->in_memory) { - RTE_LOG(DEBUG, EAL, "In-memory mode enabled, " + RTE_LOG_LINE(DEBUG, EAL, "In-memory mode enabled, " "hugepages of size %" PRIu64 " bytes " - "will be allocated anonymously\n", + "will be allocated anonymously", hpi->hugepage_sz); calc_num_pages(hpi, dirent, 0); num_sizes++; @@ -549,8 +549,8 @@ hugepage_info_init(void) /* if blocking lock failed */ if (flock(hpi->lock_descriptor, LOCK_EX) == -1) { - RTE_LOG(CRIT, EAL, - "Failed to lock hugepage directory!\n"); + RTE_LOG_LINE(CRIT, EAL, + "Failed to lock hugepage directory!"); break; } @@ -626,7 +626,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to 
create shared memory!"); return -1; } @@ -641,7 +641,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; @@ -657,14 +657,14 @@ int eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index eabac24992..9a7169c4e4 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -123,7 +123,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -140,7 +140,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -168,7 +168,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error masking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -184,7 +184,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -208,7 +208,7 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle) vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -238,7 +238,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -264,7 +264,7 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) { vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -303,7 +303,7 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, 
VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -331,7 +331,7 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -363,7 +363,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -392,7 +392,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -409,16 +409,16 @@ uio_intx_intr_disable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* disable interrupts */ command_high |= 0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error disabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error disabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -435,16 +435,16 @@ uio_intx_intr_enable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* enable interrupts */ command_high &= ~0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error enabling interrupts for fd %d\n", + RTE_LOG_LINE(ERR, EAL, + "Error enabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -459,7 +459,7 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error disabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -473,7 +473,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Error enabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -492,14 +492,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, "Registering with invalid 
input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Registering with invalid input parameter"); return -EINVAL; } /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); return -ENOMEM; } callback->cb_fn = cb; @@ -526,14 +526,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Can not allocate memory"); ret = -ENOMEM; free(callback); callback = NULL; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + RTE_LOG_LINE(ERR, EAL, "Can not create intr instance"); ret = -ENOMEM; free(callback); callback = NULL; @@ -575,7 +575,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -625,7 +625,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -752,7 +752,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -817,7 +817,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) return -1; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -884,7 +884,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -972,8 +972,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + RTE_LOG_LINE(ERR, EAL, "Error reading from file " + "descriptor %d: %s", events[n].data.fd, strerror(errno)); /* @@ -995,8 +995,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) free(src); return -1; } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", events[n].data.fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from file " + "descriptor %d", events[n].data.fd); else call = true; } @@ -1080,8 +1080,8 @@ eal_intr_handle_interrupts(int pfd, unsigned totalfds) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "epoll_wait returns with fail\n"); + RTE_LOG_LINE(ERR, EAL, + "epoll_wait returns with fail"); return; } /* epoll_wait timeout, will never happens here */ @@ -1192,8 +1192,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for 
interrupt handling\n"); + RTE_LOG_LINE(ERR, EAL, + "Failed to create thread for interrupt handling"); } return ret; @@ -1226,7 +1226,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) return; default: bytes_read = 1; - RTE_LOG(INFO, EAL, "unexpected intr type\n"); + RTE_LOG_LINE(INFO, EAL, "unexpected intr type"); break; } @@ -1242,11 +1242,11 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) if (errno == EINTR || errno == EWOULDBLOCK || errno == EAGAIN) continue; - RTE_LOG(ERR, EAL, - "Error reading from fd %d: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Error reading from fd %d: %s", fd, strerror(errno)); } else if (nbytes == 0) - RTE_LOG(ERR, EAL, "Read nothing from fd %d\n", fd); + RTE_LOG_LINE(ERR, EAL, "Read nothing from fd %d", fd); return; } while (1); } @@ -1296,8 +1296,8 @@ eal_init_tls_epfd(void) int pfd = epoll_create(255); if (pfd < 0) { - RTE_LOG(ERR, EAL, - "Cannot create epoll instance\n"); + RTE_LOG_LINE(ERR, EAL, + "Cannot create epoll instance"); return -1; } return pfd; @@ -1320,7 +1320,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, int rc; if (!events) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1342,7 +1342,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, continue; } /* epoll_wait fail */ - RTE_LOG(ERR, EAL, "epoll_wait returns with fail %s\n", + RTE_LOG_LINE(ERR, EAL, "epoll_wait returns with fail %s", strerror(errno)); rc = -1; break; @@ -1393,7 +1393,7 @@ rte_epoll_ctl(int epfd, int op, int fd, struct epoll_event ev; if (!event) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL"); return -1; } @@ -1411,7 +1411,7 @@ rte_epoll_ctl(int epfd, int op, int fd, ev.events = event->epdata.event; if (epoll_ctl(epfd, op, fd, &ev) < 0) { - RTE_LOG(ERR, EAL, "Error op %d fd %d epoll_ctl, %s\n", + RTE_LOG_LINE(ERR, EAL, "Error op %d fd %d epoll_ctl, %s", op, fd, strerror(errno)); if (op == EPOLL_CTL_ADD) /* rollback status when CTL_ADD fail */ @@ -1442,7 +1442,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 || efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) { - RTE_LOG(ERR, EAL, "Wrong intr vector number.\n"); + RTE_LOG_LINE(ERR, EAL, "Wrong intr vector number."); return -EPERM; } @@ -1452,7 +1452,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) != RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event already been added.\n"); + RTE_LOG_LINE(INFO, EAL, "Event already been added."); return -EEXIST; } @@ -1465,9 +1465,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = rte_epoll_ctl(epfd, epfd_op, rte_intr_efds_index_get(intr_handle, efd_idx), rev); if (!rc) - RTE_LOG(DEBUG, EAL, - "efd %d associated with vec %d added on epfd %d" - "\n", rev->fd, vec, epfd); + RTE_LOG_LINE(DEBUG, EAL, + "efd %d associated with vec %d added on epfd %d", + rev->fd, vec, epfd); else rc = -EPERM; break; @@ -1476,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) == RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event does not exist.\n"); + RTE_LOG_LINE(INFO, EAL, "Event does not 
exist."); return -EPERM; } @@ -1485,7 +1485,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = -EPERM; break; default: - RTE_LOG(ERR, EAL, "event op type mismatch\n"); + RTE_LOG_LINE(ERR, EAL, "event op type mismatch"); rc = -EPERM; } @@ -1523,8 +1523,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) for (i = 0; i < n; i++) { fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (fd < 0) { - RTE_LOG(ERR, EAL, - "can't setup eventfd, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, + "can't setup eventfd, error %i (%s)", errno, strerror(errno)); return -errno; } @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); + RTE_LOG_LINE(ERR, EAL, "the efd_counter_size is oversized"); return -EINVAL; } } else { diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c index 2e6a350603..42bf0ee7a1 100644 --- a/lib/eal/linux/eal_lcore.c +++ b/lib/eal/linux/eal_lcore.c @@ -68,7 +68,7 @@ eal_cpu_core_id(unsigned lcore_id) return (unsigned)id; err: - RTE_LOG(ERR, EAL, "Error reading core id value from %s " - "for lcore %u - assuming core 0\n", SYS_CPU_DIR, lcore_id); + RTE_LOG_LINE(ERR, EAL, "Error reading core id value from %s " + "for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id); return 0; } diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c index 9853ec78a2..35a1868e32 100644 --- a/lib/eal/linux/eal_memalloc.c +++ b/lib/eal/linux/eal_memalloc.c @@ -147,7 +147,7 @@ check_numa(void) bool ret = true; /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); ret = false; } return ret; @@ -156,16 +156,16 @@ check_numa(void) static void prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); if (get_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. 
" - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); *oldpolicy = MPOL_DEFAULT; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", socket_id); numa_set_preferred(socket_id); } @@ -173,13 +173,13 @@ prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) static void restore_numa(int *oldpolicy, struct bitmask *oldmask) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", *oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", *oldpolicy); if (*oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(*oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -223,7 +223,7 @@ static int lock(int fd, int type) /* couldn't lock */ return 0; } else if (ret) { - RTE_LOG(ERR, EAL, "%s(): error calling flock(): %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): error calling flock(): %s", __func__, strerror(errno)); return -1; } @@ -251,7 +251,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, snprintf(segname, sizeof(segname), "seg_%i", list_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -265,7 +265,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, list_idx, seg_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -316,7 +316,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, */ ret = stat(path, &st); if (ret < 0 && errno != ENOENT) { - RTE_LOG(DEBUG, EAL, "%s(): stat() for '%s' failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): stat() for '%s' failed: %s", __func__, path, strerror(errno)); return -1; } @@ -342,7 +342,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, ret == 0) { /* coverity[toctou] */ if (unlink(path) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): could not remove '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): could not remove '%s': %s", __func__, path, strerror(errno)); return -1; } @@ -351,13 +351,13 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, /* coverity[toctou] */ fd = open(path, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(ERR, EAL, "%s(): open '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): open '%s' failed: %s", __func__, path, strerror(errno)); return -1; } /* take out a read lock */ if (lock(fd, LOCK_SH) < 0) { - RTE_LOG(ERR, EAL, "%s(): lock '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): lock '%s' failed: %s", __func__, path, strerror(errno)); close(fd); return -1; @@ -378,7 +378,7 @@ resize_hugefile_in_memory(int fd, uint64_t fa_offset, ret = fallocate(fd, flags, fa_offset, page_sz); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -402,7 +402,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, */ if (!grow) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate not supported, not freeing 
page back to the system", __func__); return -1; } @@ -414,7 +414,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, *dirty = new_size <= cur_size; if (new_size > cur_size && ftruncate(fd, new_size) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); return -1; } @@ -444,12 +444,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, if (ret < 0) { if (fallocate_supported == -1 && errno == ENOTSUP) { - RTE_LOG(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled\n", + RTE_LOG_LINE(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled", __func__); again = true; fallocate_supported = 0; } else { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -483,7 +483,7 @@ close_hugefile(int fd, char *path, int list_idx) if (!internal_conf->in_memory && rte_eal_process_type() == RTE_PROC_PRIMARY && unlink(path)) - RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): unlinking '%s' failed: %s", __func__, path, strerror(errno)); close(fd); @@ -536,12 +536,12 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, /* these are checked at init, but code analyzers don't know that */ if (internal_conf->in_memory && !anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory"); return -1; } if (internal_conf->in_memory && !memfd_create_supported && internal_conf->single_file_segments) { - RTE_LOG(ERR, EAL, "Single-file segments are not supported without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments are not supported without memfd support"); return -1; } @@ -569,7 +569,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx, &dirty); if (fd < 0) { - RTE_LOG(ERR, EAL, "Couldn't get fd on hugepage file\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't get fd on hugepage file"); return -1; } @@ -584,14 +584,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, } else { map_offset = 0; if (ftruncate(fd, alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); goto resized; } if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->in_memory) { if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "%s(): unlink() failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): unlink() failed: %s", __func__, strerror(errno)); goto resized; } @@ -610,7 +610,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, map_offset); if (va == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap() failed: %s", __func__, strerror(errno)); /* mmap failed, but the previous region might have been * unmapped anyway. 
try to remap it @@ -618,7 +618,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, goto unmapped; } if (va != addr) { - RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__); + RTE_LOG_LINE(DEBUG, EAL, "%s(): wrong mmap() address", __func__); munmap(va, alloc_sz); goto resized; } @@ -631,7 +631,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB", (unsigned int)(alloc_sz >> 20)); goto mapped; } @@ -645,7 +645,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_PHYS_ADDR) { - RTE_LOG(DEBUG, EAL, "%s(): can't get IOVA addr\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): can't get IOVA addr", __func__); goto mapped; } @@ -661,19 +661,19 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, ret = get_mempolicy(&cur_socket_id, NULL, 0, addr, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): get_mempolicy: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): get_mempolicy: %s", __func__, strerror(errno)); goto mapped; } else if (cur_socket_id != socket_id) { - RTE_LOG(DEBUG, EAL, - "%s(): allocation happened on wrong socket (wanted %d, got %d)\n", + RTE_LOG_LINE(DEBUG, EAL, + "%s(): allocation happened on wrong socket (wanted %d, got %d)", __func__, socket_id, cur_socket_id); goto mapped; } } #else if (rte_socket_count() > 1) - RTE_LOG(DEBUG, EAL, "%s(): not checking hugepage NUMA node.\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): not checking hugepage NUMA node.", __func__); #endif @@ -703,7 +703,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * somebody else maps this hole now, we could accidentally * override it in the future. */ - RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n"); + RTE_LOG_LINE(CRIT, EAL, "Can't mmap holes in our virtual address space"); } /* roll back the ref count */ if (internal_conf->single_file_segments) @@ -748,7 +748,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi, if (mmap(ms->addr, ms->len, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "couldn't unmap page\n"); + RTE_LOG_LINE(DEBUG, EAL, "couldn't unmap page"); return -1; } @@ -873,13 +873,13 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -896,7 +896,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (alloc_seg(cur, map_addr, wa->socket, wa->hi, msl_idx, cur_idx)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated\n", + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ @@ -916,7 +916,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) * may fail. 
*/ if (free_seg(tmp, wa->hi, msl_idx, j)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -980,13 +980,13 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1037,7 +1037,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, } } if (!hi) { - RTE_LOG(ERR, EAL, "%s(): can't find relevant hugepage_info entry\n", + RTE_LOG_LINE(ERR, EAL, "%s(): can't find relevant hugepage_info entry", __func__); return -1; } @@ -1061,7 +1061,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n", + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't find suitable memseg_list", __func__); ret = -1; } else if (ret > 0) { @@ -1104,7 +1104,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -1118,7 +1118,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == (int)RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -1133,7 +1133,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; @@ -1344,13 +1344,13 @@ sync_existing(struct rte_memseg_list *primary_msl, */ dir_fd = open(hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__, hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__, hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1405,7 +1405,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused) } } if (!hi) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); return -1; } @@ -1454,7 +1454,7 @@ secondary_msl_create_walk(const struct rte_memseg_list *msl, primary_msl->memseg_arr.len, primary_msl->memseg_arr.elt_sz); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot initialize local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot initialize local memory map"); return -1; } local_msl->base_va = primary_msl->base_va; @@ -1479,7 +1479,7 @@ 
secondary_msl_destroy_walk(const struct rte_memseg_list *msl, ret = rte_fbarray_destroy(&local_msl->memseg_arr); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot destroy local memory map\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy local memory map"); return -1; } local_msl->base_va = NULL; @@ -1501,7 +1501,7 @@ alloc_list(int list_idx, int len) /* ensure we have space to store fd per each possible segment */ data = malloc(sizeof(int) * len); if (data == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to allocate space for file descriptors"); return -1; } /* set all fd's as invalid */ @@ -1750,13 +1750,13 @@ eal_memalloc_init(void) int mfd_res = test_memfd_create(); if (mfd_res < 0) { - RTE_LOG(ERR, EAL, "Unable to check if memfd is supported\n"); + RTE_LOG_LINE(ERR, EAL, "Unable to check if memfd is supported"); return -1; } if (mfd_res == 1) - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); else - RTE_LOG(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages\n"); + RTE_LOG_LINE(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages"); /* we only support single-file segments mode with in-memory mode * if we support hugetlbfs with memfd_create. this code will @@ -1764,18 +1764,18 @@ eal_memalloc_init(void) */ if (internal_conf->single_file_segments && mfd_res != 1) { - RTE_LOG(ERR, EAL, "Single-file segments mode cannot be used without memfd support\n"); + RTE_LOG_LINE(ERR, EAL, "Single-file segments mode cannot be used without memfd support"); return -1; } /* this cannot ever happen but better safe than sorry */ if (!anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Using anonymous memory is not supported\n"); + RTE_LOG_LINE(ERR, EAL, "Using anonymous memory is not supported"); return -1; } /* safety net, should be impossible to configure */ if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->hugepage_file.unlink_existing) { - RTE_LOG(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping."); return -1; } } diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c index 9b6f08fba8..2f2551588b 100644 --- a/lib/eal/linux/eal_memory.c +++ b/lib/eal/linux/eal_memory.c @@ -104,7 +104,7 @@ rte_mem_virt2phy(const void *virtaddr) fd = open("/proc/self/pagemap", O_RDONLY); if (fd < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } @@ -112,7 +112,7 @@ rte_mem_virt2phy(const void *virtaddr) virt_pfn = (unsigned long)virtaddr / page_size; offset = sizeof(uint64_t) * virt_pfn; if (lseek(fd, offset, SEEK_SET) == (off_t) -1) { - RTE_LOG(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s", __func__, strerror(errno)); close(fd); return RTE_BAD_IOVA; @@ -121,12 +121,12 @@ rte_mem_virt2phy(const void *virtaddr) retval = read(fd, &page, PFN_MASK_SIZE); close(fd); if (retval < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s\n", + RTE_LOG_LINE(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } else if (retval != PFN_MASK_SIZE) { - RTE_LOG(INFO, EAL, 
"%s(): read %d bytes from /proc/self/pagemap " - "but expected %d:\n", + RTE_LOG_LINE(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap " + "but expected %d:", __func__, retval, PFN_MASK_SIZE); return RTE_BAD_IOVA; } @@ -237,7 +237,7 @@ static int huge_wrap_sigsetjmp(void) /* Callback for numa library. */ void numa_error(char *where) { - RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s failed: %s", where, strerror(errno)); } #endif @@ -267,18 +267,18 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported."); have_numa = false; } if (have_numa) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy."); oldmask = numa_allocate_nodemask(); if (get_mempolicy(&oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Failed to get current mempolicy: %s. " - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); oldpolicy = MPOL_DEFAULT; } for (i = 0; i < RTE_MAX_NUMA_NODES; i++) @@ -316,8 +316,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, essential_memory[j] -= hugepage_sz; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Setting policy MPOL_PREFERRED for socket %d", node_id); numa_set_preferred(node_id); } @@ -332,7 +332,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* try to create hugepage file */ fd = open(hf->filepath, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): open failed: %s", __func__, strerror(errno)); goto out; } @@ -345,7 +345,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, virtaddr = mmap(NULL, hugepage_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0); if (virtaddr == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__, + RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap failed: %s", __func__, strerror(errno)); close(fd); goto out; @@ -361,8 +361,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more " - "hugepages of size %u MB\n", + RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more " + "hugepages of size %u MB", (unsigned int)(hugepage_sz / 0x100000)); munmap(virtaddr, hugepage_sz); close(fd); @@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* set shared lock on the file. 
*/ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed:%s ", __func__, strerror(errno)); close(fd); goto out; @@ -390,13 +390,13 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, out: #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES if (maxnode) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", oldpolicy); + RTE_LOG_LINE(DEBUG, EAL, + "Restoring previous memory policy: %d", oldpolicy); if (oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -424,8 +424,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) f = fopen("/proc/self/numa_maps", "r"); if (f == NULL) { - RTE_LOG(NOTICE, EAL, "NUMA support not available" - " consider that all memory is in socket_id 0\n"); + RTE_LOG_LINE(NOTICE, EAL, "NUMA support not available" + " consider that all memory is in socket_id 0"); return 0; } @@ -443,20 +443,20 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) /* get zone addr */ virt_addr = strtoull(buf, &end, 16); if (virt_addr == 0 || end == buf) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } /* get node id (socket id) */ nodestr = strstr(buf, " N"); if (nodestr == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } nodestr += 2; end = strstr(nodestr, "="); if (end == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } end[0] = '\0'; @@ -464,7 +464,7 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) socket_id = strtoul(nodestr, &end, 0); if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__); goto error; } @@ -475,8 +475,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) hugepg_tbl[i].socket_id = socket_id; hp_count++; #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES - RTE_LOG(DEBUG, EAL, - "Hugepage %s is on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, + "Hugepage %s is on socket %d", hugepg_tbl[i].filepath, socket_id); #endif } @@ -589,7 +589,7 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl, struct hugepage_file *hp = &hugepg_tbl[page]; if (hp->orig_va != NULL && unlink(hp->filepath)) { - RTE_LOG(WARNING, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(WARNING, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); } } @@ -639,7 +639,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl, hp->orig_va = NULL; if (unlink(hp->filepath) == -1) { - RTE_LOG(ERR, EAL, "%s(): Removing %s failed: %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); return -1; } @@ -676,7 +676,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) socket_id = hugepages[seg_start].socket_id; seg_len = seg_end - seg_start; - RTE_LOG(DEBUG, EAL, "Attempting to map %" 
PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20ULL, socket_id); /* find free space in memseg lists */ @@ -716,8 +716,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } @@ -735,13 +735,13 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) fd = open(hfile->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open '%s': %s", hfile->filepath, strerror(errno)); return -1; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "Could not lock '%s': %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Could not lock '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -755,7 +755,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) addr = mmap(addr, page_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Couldn't remap '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Couldn't remap '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -790,10 +790,10 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } - RTE_LOG(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20, socket_id); return seg_len; } @@ -819,7 +819,7 @@ static int memseg_list_free(struct rte_memseg_list *msl) { if (rte_fbarray_destroy(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot destroy memseg list"); return -1; } memset(msl, 0, sizeof(*msl)); @@ -965,7 +965,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -976,7 +976,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) /* finally, allocate VA space */ if (eal_memseg_list_alloc(msl, 0) < 0) { - RTE_LOG(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages\n", + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages", page_sz >> 10); return -1; } @@ -1177,15 +1177,15 @@ eal_legacy_hugepage_init(void) /* create a memfd and store it in the segment fd table */ memfd = memfd_create("nohuge", 0); if (memfd < 0) { - RTE_LOG(DEBUG, EAL, "Cannot create memfd: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot create memfd: %s", strerror(errno)); - RTE_LOG(DEBUG, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(DEBUG, EAL, "Falling back to anonymous map"); } else { /* we got an fd - now resize it */ if (ftruncate(memfd, internal_conf->memory) < 0) { 
- RTE_LOG(ERR, EAL, "Cannot resize memfd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot resize memfd: %s", strerror(errno)); - RTE_LOG(ERR, EAL, "Falling back to anonymous map\n"); + RTE_LOG_LINE(ERR, EAL, "Falling back to anonymous map"); close(memfd); } else { /* creating memfd-backed file was successful. @@ -1193,7 +1193,7 @@ eal_legacy_hugepage_init(void) * other processes (such as vhost backend), so * map it as shared memory. */ - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory"); fd = memfd; flags = MAP_SHARED; } @@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void) * fit into the DMA mask. */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } @@ -1211,7 +1211,7 @@ eal_legacy_hugepage_init(void) addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0); if (addr == MAP_FAILED || addr != prealloc_addr) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__, strerror(errno)); munmap(prealloc_addr, mem_sz); return -1; @@ -1222,7 +1222,7 @@ eal_legacy_hugepage_init(void) */ if (fd != -1) { if (eal_memalloc_set_seg_list_fd(0, fd) < 0) { - RTE_LOG(ERR, EAL, "Cannot set up segment list fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot set up segment list fd"); /* not a serious error, proceed */ } } @@ -1231,13 +1231,13 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); if (rte_eal_iova_mode() == RTE_IOVA_VA && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): Please try initializing EAL with --iova-mode=pa parameter.", __func__); goto fail; } @@ -1292,8 +1292,8 @@ eal_legacy_hugepage_init(void) pages_old = hpi->num_pages[0]; pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, memory); if (pages_new < pages_old) { - RTE_LOG(DEBUG, EAL, - "%d not %d hugepages of size %u MB allocated\n", + RTE_LOG_LINE(DEBUG, EAL, + "%d not %d hugepages of size %u MB allocated", pages_new, pages_old, (unsigned)(hpi->hugepage_sz / 0x100000)); @@ -1309,23 +1309,23 @@ eal_legacy_hugepage_init(void) rte_eal_iova_mode() != RTE_IOVA_VA) { /* find physical addresses for each hugepage */ if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to find phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } else { /* set physical addresses for each hugepage */ if (set_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to set phys addr " - "for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to set phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){ - RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n", + RTE_LOG_LINE(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages", (unsigned)(hpi->hugepage_sz / 0x100000)); goto fail; } @@ -1382,9 +1382,9 @@ 
eal_legacy_hugepage_init(void) for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) { for (j = 0; j < RTE_MAX_NUMA_NODES; j++) { if (used_hp[i].num_pages[j] > 0) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Requesting %u pages of size %uMB" - " from socket %i\n", + " from socket %i", used_hp[i].num_pages[j], (unsigned) (used_hp[i].hugepage_sz / 0x100000), @@ -1398,7 +1398,7 @@ eal_legacy_hugepage_init(void) nr_hugefiles * sizeof(struct hugepage_file)); if (hugepage == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!"); goto fail; } memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file)); @@ -1409,7 +1409,7 @@ eal_legacy_hugepage_init(void) */ if (unmap_unneeded_hugepages(tmp_hp, used_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unmapping and locking hugepages failed!"); goto fail; } @@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void) */ if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles, tmp_hp, nr_hugefiles) < 0) { - RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Copying tables to shared memory failed!"); goto fail; } @@ -1428,7 +1428,7 @@ eal_legacy_hugepage_init(void) /* for legacy 32-bit mode, we did not preallocate VA space, so do it */ if (internal_conf->legacy_mem && prealloc_segments(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Could not preallocate VA space for hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Could not preallocate VA space for hugepages"); goto fail; } #endif @@ -1437,14 +1437,14 @@ eal_legacy_hugepage_init(void) * pages become first-class citizens in DPDK memory subsystem */ if (remap_needed_hugepages(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Couldn't remap hugepage files into memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't remap hugepage files into memseg lists"); goto fail; } /* free the hugepage backing files */ if (internal_conf->hugepage_file.unlink_before_mapping && unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n"); + RTE_LOG_LINE(ERR, EAL, "Unlinking hugepage files failed!"); goto fail; } @@ -1480,8 +1480,8 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + RTE_LOG_LINE(ERR, EAL, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); goto fail; } @@ -1527,15 +1527,15 @@ eal_legacy_hugepage_attach(void) int fd, fd_hugepage = -1; if (aslr_enabled() > 0) { - RTE_LOG(WARNING, EAL, "WARNING: Address Space Layout Randomization " - "(ASLR) is enabled in the kernel.\n"); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory " - "into secondary processes\n"); + RTE_LOG_LINE(WARNING, EAL, "WARNING: Address Space Layout Randomization " + "(ASLR) is enabled in the kernel."); + RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory " + "into secondary processes"); } fd_hugepage = open(eal_hugepage_data_path(), O_RDONLY); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s", eal_hugepage_data_path()); goto error; } @@ -1543,13 +1543,13 @@ eal_legacy_hugepage_attach(void) size = getFileSize(fd_hugepage); hp = mmap(NULL, size, PROT_READ, 
MAP_PRIVATE, fd_hugepage, 0); if (hp == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not mmap %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not mmap %s", eal_hugepage_data_path()); goto error; } num_hp = size / sizeof(struct hugepage_file); - RTE_LOG(DEBUG, EAL, "Analysing %u files\n", num_hp); + RTE_LOG_LINE(DEBUG, EAL, "Analysing %u files", num_hp); /* map all segments into memory to make sure we get the addrs. the * segments themselves are already in memseg list (which is shared and @@ -1570,7 +1570,7 @@ eal_legacy_hugepage_attach(void) fd = open(hf->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not open %s: %s", hf->filepath, strerror(errno)); goto error; } @@ -1578,14 +1578,14 @@ eal_legacy_hugepage_attach(void) map_addr = mmap(map_addr, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0); if (map_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not map %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not map %s: %s", hf->filepath, strerror(errno)); goto fd_error; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed: %s", __func__, strerror(errno)); goto mmap_error; } @@ -1593,13 +1593,13 @@ eal_legacy_hugepage_attach(void) /* find segment data */ msl = rte_mem_virt2memseg_list(map_addr); if (msl == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg list\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg list", __func__); goto mmap_error; } ms = rte_mem_virt2memseg(map_addr, msl); if (ms == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg", __func__); goto mmap_error; } @@ -1607,14 +1607,14 @@ eal_legacy_hugepage_attach(void) msl_idx = msl - mcfg->memsegs; ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms); if (ms_idx < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg idx\n", + RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg idx", __func__); goto mmap_error; } /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s", rte_strerror(rte_errno)); } /* unmap the hugepage config file, since we are done using it */ @@ -1642,9 +1642,9 @@ static int eal_hugepage_attach(void) { if (eal_memalloc_sync_with_primary()) { - RTE_LOG(ERR, EAL, "Could not map memory from primary process\n"); + RTE_LOG_LINE(ERR, EAL, "Could not map memory from primary process"); if (aslr_enabled() > 0) - RTE_LOG(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes\n"); + RTE_LOG_LINE(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes"); return -1; } return 0; @@ -1740,7 +1740,7 @@ memseg_primary_init_32(void) max_mem = (uint64_t)RTE_MAX_MEM_MB << 20; if (total_requested_mem > max_mem) { - RTE_LOG(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory\n", + RTE_LOG_LINE(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory", (unsigned int)(max_mem >> 20)); return -1; } @@ -1787,7 +1787,7 @@ memseg_primary_init_32(void) skip |= active_sockets == 0 && socket_id != main_lcore_socket; if (skip) { - RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, "Will not preallocate memory on socket %u", socket_id); continue; } 
@@ -1819,8 +1819,8 @@ memseg_primary_init_32(void) max_pagesz_mem = RTE_ALIGN_FLOOR(max_pagesz_mem, hugepage_sz); - RTE_LOG(DEBUG, EAL, "Attempting to preallocate " - "%" PRIu64 "M on socket %i\n", + RTE_LOG_LINE(DEBUG, EAL, "Attempting to preallocate " + "%" PRIu64 "M on socket %i", max_pagesz_mem >> 20, socket_id); type_msl_idx = 0; @@ -1830,8 +1830,8 @@ memseg_primary_init_32(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + RTE_LOG_LINE(ERR, EAL, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -1847,7 +1847,7 @@ memseg_primary_init_32(void) /* failing to allocate a memseg list is * a serious error. */ - RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list"); return -1; } @@ -1855,7 +1855,7 @@ memseg_primary_init_32(void) /* if we couldn't allocate VA space, we * can try with smaller page sizes. */ - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size"); /* deallocate memseg list */ if (memseg_list_free(msl)) return -1; @@ -1870,7 +1870,7 @@ memseg_primary_init_32(void) cur_socket_mem += cur_pagesz_mem; } if (cur_socket_mem == 0) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n", + RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space on socket %u", socket_id); return -1; } @@ -1901,13 +1901,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory"); return -1; } } @@ -1930,21 +1930,21 @@ rte_eal_memseg_init(void) lim.rlim_cur = lim.rlim_max; if (setrlimit(RLIMIT_NOFILE, &lim) < 0) { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files failed: %s", strerror(errno)); } else { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files to %" - PRIu64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files to %" + PRIu64, (uint64_t)lim.rlim_cur); } } else { - RTE_LOG(ERR, EAL, "Cannot get current resource limits\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot get current resource limits"); } #ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES if (!internal_conf->legacy_mem && rte_socket_count() > 1) { - RTE_LOG(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.\n"); - RTE_LOG(WARNING, EAL, "This will have adverse consequences for performance and usability.\n"); - RTE_LOG(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.\n"); + RTE_LOG_LINE(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support."); + RTE_LOG_LINE(WARNING, EAL, "This will have adverse consequences for performance and usability."); + RTE_LOG_LINE(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support."); } #endif diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c index 880070c627..80b6f19a9e 100644 --- a/lib/eal/linux/eal_thread.c +++ 
b/lib/eal/linux/eal_thread.c @@ -28,7 +28,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name"); ret = pthread_setname_np((pthread_t)thread_id.opaque_id, truncated); #endif @@ -37,5 +37,5 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) RTE_SET_USED(thread_name); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index df9ad61ae9..3813b1a66e 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -139,20 +139,20 @@ rte_eal_hpet_init(int make_default) eal_get_internal_configuration(); if (internal_conf->no_hpet) { - RTE_LOG(NOTICE, EAL, "HPET is disabled\n"); + RTE_LOG_LINE(NOTICE, EAL, "HPET is disabled"); return -1; } fd = open(DEV_HPET, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!\n", + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!", strerror(errno)); internal_conf->no_hpet = 1; return -1; } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!"); close(fd); internal_conf->no_hpet = 1; return -1; @@ -166,7 +166,7 @@ rte_eal_hpet_init(int make_default) eal_hpet_resolution_hz = (1000ULL*1000ULL*1000ULL*1000ULL*1000ULL) / (uint64_t)eal_hpet_resolution_fs; - RTE_LOG(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz\n", + RTE_LOG_LINE(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz", eal_hpet_resolution_hz/1000); eal_hpet_msb = (eal_hpet->counter_l >> 30); @@ -176,7 +176,7 @@ rte_eal_hpet_init(int make_default) ret = rte_thread_create_internal_control(&msb_inc_thread_id, "hpet-msb", hpet_msb_inc, NULL); if (ret != 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot create HPET timer thread!\n"); + RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot create HPET timer thread!"); internal_conf->no_hpet = 1; return -1; } diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c index ad3c1654b2..e8a783aaa8 100644 --- a/lib/eal/linux/eal_vfio.c +++ b/lib/eal/linux/eal_vfio.c @@ -367,7 +367,7 @@ vfio_open_group_fd(int iommu_group_num) if (vfio_group_fd < 0) { /* if file not found, it's not an error */ if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -379,8 +379,8 @@ vfio_open_group_fd(int iommu_group_num) vfio_group_fd = open(filename, O_RDWR); if (vfio_group_fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, - "Cannot open %s: %s\n", + RTE_LOG_LINE(ERR, EAL, + "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -408,14 +408,14 @@ vfio_open_group_fd(int iommu_group_num) if (p->result == SOCKET_OK && mp_rep->num_fds == 1) { vfio_group_fd = mp_rep->fds[0]; } else if (p->result == SOCKET_NO_FD) { - RTE_LOG(ERR, EAL, "Bad VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Bad VFIO group fd"); vfio_group_fd = -ENOENT; } } free(mp_reply.msgs); if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT) - RTE_LOG(ERR, EAL, "Cannot request VFIO group fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO group fd"); return vfio_group_fd; } @@ -452,7 +452,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* 
Lets see first if there is room for a new group */ if (vfio_cfg->vfio_active_groups == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "Maximum number of VFIO groups reached!\n"); + RTE_LOG_LINE(ERR, EAL, "Maximum number of VFIO groups reached!"); return -1; } @@ -465,13 +465,13 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* This should not happen */ if (i == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "No VFIO group free slot found\n"); + RTE_LOG_LINE(ERR, EAL, "No VFIO group free slot found"); return -1; } vfio_group_fd = vfio_open_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(ERR, EAL, "Failed to open VFIO group %d\n", + RTE_LOG_LINE(ERR, EAL, "Failed to open VFIO group %d", iommu_group_num); return vfio_group_fd; } @@ -551,13 +551,13 @@ vfio_group_device_get(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices++; } @@ -570,13 +570,13 @@ vfio_group_device_put(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices--; } @@ -589,13 +589,13 @@ vfio_group_device_count(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) { - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i); return -1; } @@ -636,8 +636,8 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, while (cur_len < len) { /* some memory segments may have invalid IOVA */ if (ms->iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Memory segment at %p has bad IOVA, skipping\n", + RTE_LOG_LINE(DEBUG, EAL, + "Memory segment at %p has bad IOVA, skipping", ms->addr); goto next; } @@ -670,7 +670,7 @@ vfio_sync_default_container(void) /* default container fd should have been opened in rte_vfio_enable() */ if (!default_vfio_cfg->vfio_enabled || default_vfio_cfg->vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, "VFIO support is not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support is not initialized"); return -1; } @@ -690,8 +690,8 @@ vfio_sync_default_container(void) } free(mp_reply.msgs); if (iommu_type_id < 0) { - RTE_LOG(ERR, EAL, - "Could not get IOMMU type for default container\n"); + RTE_LOG_LINE(ERR, EAL, + "Could not get IOMMU type for default container"); return -1; } @@ -708,7 +708,7 @@ vfio_sync_default_container(void) return 0; } - RTE_LOG(ERR, EAL, "Could not find IOMMU type id (%i)\n", + RTE_LOG_LINE(ERR, EAL, "Could not find IOMMU type id (%i)", iommu_type_id); return -1; } @@ -721,7 +721,7 @@ rte_vfio_clear_group(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO 
group fd!\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!"); return -1; } @@ -756,8 +756,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret == 0) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -776,8 +776,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * isn't managed by VFIO */ if (vfio_group_fd == -ENOENT) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + RTE_LOG_LINE(NOTICE, EAL, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -790,14 +790,14 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* check if the group is viable */ ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get VFIO group status, " - "error %i (%s)\n", dev_addr, errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "%s cannot get VFIO group status, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; } else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) { - RTE_LOG(ERR, EAL, "%s VFIO group is not viable! " - "Not all devices in IOMMU group bound to VFIO or unbound\n", + RTE_LOG_LINE(ERR, EAL, "%s VFIO group is not viable! " + "Not all devices in IOMMU group bound to VFIO or unbound", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -817,9 +817,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER, &vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s cannot add VFIO group to container, error " - "%i (%s)\n", dev_addr, errno, strerror(errno)); + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -841,8 +841,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* select an IOMMU type which we will be using */ t = vfio_set_iommu_type(vfio_container_fd); if (!t) { - RTE_LOG(ERR, EAL, - "%s failed to select IOMMU type\n", + RTE_LOG_LINE(ERR, EAL, + "%s failed to select IOMMU type", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -857,9 +857,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, else ret = 0; if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s DMA remapping failed, error " - "%i (%s)\n", + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -886,10 +886,10 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, map->addr, map->iova, map->len, 1); if (ret) { - RTE_LOG(ERR, EAL, "Couldn't map user memory for DMA: " + RTE_LOG_LINE(ERR, EAL, "Couldn't map user memory for DMA: " "va: 0x%" PRIx64 " " "iova: 0x%" PRIx64 " " - "len: 0x%" PRIu64 "\n", + "len: 0x%" PRIu64, map->addr, map->iova, map->len); rte_spinlock_recursive_unlock( @@ -911,13 +911,13 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, rte_mcfg_mem_read_unlock(); if (ret && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n"); + RTE_LOG_LINE(ERR, EAL, "Could not install memory event callback for VFIO"); return -1; } if (ret) - RTE_LOG(DEBUG, EAL, "Memory event callbacks not supported\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "Memory event callbacks not supported"); else - RTE_LOG(DEBUG, EAL, "Installed memory event callback for VFIO\n"); + RTE_LOG_LINE(DEBUG, EAL, "Installed memory event callback for VFIO"); } } else if (rte_eal_process_type() != RTE_PROC_PRIMARY && vfio_cfg == default_vfio_cfg && @@ -929,7 +929,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, */ ret = vfio_sync_default_container(); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not sync default VFIO container\n"); + RTE_LOG_LINE(ERR, EAL, "Could not sync default VFIO container"); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -937,7 +937,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* we have successfully initialized VFIO, notify user */ const struct vfio_iommu_type *t = default_vfio_cfg->vfio_iommu_type; - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); } @@ -965,7 +965,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * the VFIO group or the container not having IOMMU configured. */ - RTE_LOG(WARNING, EAL, "Getting a vfio_dev_fd for %s failed\n", + RTE_LOG_LINE(WARNING, EAL, "Getting a vfio_dev_fd for %s failed", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -976,8 +976,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, dev_get_info: ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get device info, " - "error %i (%s)\n", dev_addr, errno, + RTE_LOG_LINE(ERR, EAL, "%s cannot get device info, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(*vfio_dev_fd); close(vfio_group_fd); @@ -1007,7 +1007,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret <= 0) { - RTE_LOG(WARNING, EAL, "%s not managed by VFIO driver\n", + RTE_LOG_LINE(WARNING, EAL, "%s not managed by VFIO driver", dev_addr); /* This is an error at this point. 
*/ ret = -1; @@ -1017,7 +1017,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get the actual group fd */ vfio_group_fd = rte_vfio_get_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(INFO, EAL, "rte_vfio_get_group_fd failed for %s\n", + RTE_LOG_LINE(INFO, EAL, "rte_vfio_get_group_fd failed for %s", dev_addr); ret = vfio_group_fd; goto out; @@ -1034,7 +1034,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* Closing a device */ if (close(vfio_dev_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_dev_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_dev_fd for %s", dev_addr); ret = -1; goto out; @@ -1047,14 +1047,14 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, if (!vfio_group_device_count(vfio_group_fd)) { if (close(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_group_fd for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_group_fd for %s", dev_addr); ret = -1; goto out; } if (rte_vfio_clear_group(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when clearing group for %s\n", + RTE_LOG_LINE(INFO, EAL, "Error when clearing group for %s", dev_addr); ret = -1; goto out; @@ -1101,21 +1101,21 @@ rte_vfio_enable(const char *modname) } } - RTE_LOG(DEBUG, EAL, "Probing VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, "Probing VFIO support..."); /* check if vfio module is loaded */ vfio_available = rte_eal_check_module(modname); /* return error directly */ if (vfio_available == -1) { - RTE_LOG(INFO, EAL, "Could not get loaded module details!\n"); + RTE_LOG_LINE(INFO, EAL, "Could not get loaded module details!"); return -1; } /* return 0 if VFIO modules not loaded */ if (vfio_available == 0) { - RTE_LOG(DEBUG, EAL, - "VFIO modules not loaded, skipping VFIO support...\n"); + RTE_LOG_LINE(DEBUG, EAL, + "VFIO modules not loaded, skipping VFIO support..."); return 0; } @@ -1131,10 +1131,10 @@ rte_vfio_enable(const char *modname) /* check if we have VFIO driver enabled */ if (default_vfio_cfg->vfio_container_fd != -1) { - RTE_LOG(INFO, EAL, "VFIO support initialized\n"); + RTE_LOG_LINE(INFO, EAL, "VFIO support initialized"); default_vfio_cfg->vfio_enabled = 1; } else { - RTE_LOG(NOTICE, EAL, "VFIO support could not be initialized\n"); + RTE_LOG_LINE(NOTICE, EAL, "VFIO support could not be initialized"); } return 0; @@ -1186,7 +1186,7 @@ vfio_get_default_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request default VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request default VFIO container fd"); return -1; } @@ -1209,13 +1209,13 @@ vfio_set_iommu_type(int vfio_container_fd) int ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, t->type_id); if (!ret) { - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)", t->type_id, t->name); return t; } /* not an error, there may be more supported IOMMU types */ - RTE_LOG(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " - "%i (%s)\n", t->type_id, t->name, errno, + RTE_LOG_LINE(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " + "%i (%s)", t->type_id, t->name, errno, strerror(errno)); } /* if we didn't find a suitable IOMMU type, fail */ @@ -1233,15 +1233,15 @@ vfio_has_supported_extensions(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, t->type_id); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not get IOMMU type, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Could not get IOMMU type, 
error " + "%i (%s)", errno, strerror(errno)); close(vfio_container_fd); return -1; } else if (ret == 1) { /* we found a supported extension */ n_extensions++; } - RTE_LOG(DEBUG, EAL, "IOMMU type %d (%s) is %s\n", + RTE_LOG_LINE(DEBUG, EAL, "IOMMU type %d (%s) is %s", t->type_id, t->name, ret ? "supported" : "not supported"); } @@ -1271,9 +1271,9 @@ rte_vfio_get_container_fd(void) if (internal_conf->process_type == RTE_PROC_PRIMARY) { vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR); if (vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO container %s, error " - "%i (%s)\n", VFIO_CONTAINER_PATH, + "%i (%s)", VFIO_CONTAINER_PATH, errno, strerror(errno)); return -1; } @@ -1282,19 +1282,19 @@ rte_vfio_get_container_fd(void) ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION); if (ret != VFIO_API_VERSION) { if (ret < 0) - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Could not get VFIO API version, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); else - RTE_LOG(ERR, EAL, "Unsupported VFIO API version!\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported VFIO API version!"); close(vfio_container_fd); return -1; } ret = vfio_has_supported_extensions(vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, - "No supported IOMMU extensions found!\n"); + RTE_LOG_LINE(ERR, EAL, + "No supported IOMMU extensions found!"); return -1; } @@ -1322,7 +1322,7 @@ rte_vfio_get_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO container fd"); return -1; } @@ -1352,7 +1352,7 @@ rte_vfio_get_group_num(const char *sysfs_base, tok, RTE_DIM(tok), '/'); if (ret <= 0) { - RTE_LOG(ERR, EAL, "%s cannot get IOMMU group\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s cannot get IOMMU group", dev_addr); return -1; } @@ -1362,7 +1362,7 @@ rte_vfio_get_group_num(const char *sysfs_base, end = group_tok; *iommu_group_num = strtol(group_tok, &end, 10); if ((end != group_tok && *end != '\0') || errno != 0) { - RTE_LOG(ERR, EAL, "%s error parsing IOMMU number!\n", dev_addr); + RTE_LOG_LINE(ERR, EAL, "%s error parsing IOMMU number!", dev_addr); return -1; } @@ -1411,12 +1411,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, * returned from kernel. 
*/ if (errno == EEXIST) { - RTE_LOG(DEBUG, EAL, + RTE_LOG_LINE(DEBUG, EAL, "Memory segment is already mapped, skipping"); } else { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot set up DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1429,12 +1429,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot clear DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot clear DMA remapping, error " + "%i (%s)", errno, strerror(errno)); return -1; } else if (dma_unmap.size != len) { - RTE_LOG(ERR, EAL, "Unexpected size %"PRIu64 - " of DMA remapping cleared instead of %"PRIu64"\n", + RTE_LOG_LINE(ERR, EAL, "Unexpected size %"PRIu64 + " of DMA remapping cleared instead of %"PRIu64, (uint64_t)dma_unmap.size, len); rte_errno = EIO; return -1; @@ -1470,16 +1470,16 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, struct vfio_iommu_type1_dma_map dma_map; if (iova + len > spapr_dma_win_len) { - RTE_LOG(ERR, EAL, "DMA map attempt outside DMA window\n"); + RTE_LOG_LINE(ERR, EAL, "DMA map attempt outside DMA window"); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot register vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1493,8 +1493,8 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map); if (ret) { - RTE_LOG(ERR, EAL, "Cannot map vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot map vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1509,17 +1509,17 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Cannot unregister vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1599,7 +1599,7 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) */ FILE *fd = fopen(proc_iomem, "r"); if (fd == NULL) { - RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Cannot open %s", proc_iomem); return -1; } /* Scan /proc/iomem for the highest PA in the system */ @@ -1612,15 +1612,15 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) /* Validate the format of the memory string */ if (space == NULL || dash == NULL || space < dash) { - RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n", + RTE_LOG_LINE(ERR, EAL, "Can't parse line \"%s\" in file %s", line, proc_iomem); continue; } start = strtoull(line, NULL, 16); end = strtoull(dash + 1, NULL, 16); - RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 - " to 0x%" PRIx64 "\n", start, end); + RTE_LOG_LINE(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 + " to 0x%" PRIx64, start, end); if (end > max) max = end; } @@ -1628,22
+1628,22 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) fclose(fd); if (max == 0) { - RTE_LOG(ERR, EAL, "Failed to find valid \"System RAM\" " - "entry in file %s\n", proc_iomem); + RTE_LOG_LINE(ERR, EAL, "Failed to find valid \"System RAM\" " + "entry in file %s", proc_iomem); return -1; } spapr_dma_win_len = rte_align64pow2(max + 1); return 0; } else if (rte_eal_iova_mode() == RTE_IOVA_VA) { - RTE_LOG(DEBUG, EAL, "Highest VA address in memseg list is 0x%" - PRIx64 "\n", param->max_va); + RTE_LOG_LINE(DEBUG, EAL, "Highest VA address in memseg list is 0x%" + PRIx64, param->max_va); spapr_dma_win_len = rte_align64pow2(param->max_va); return 0; } spapr_dma_win_len = 0; - RTE_LOG(ERR, EAL, "Unsupported IOVA mode\n"); + RTE_LOG_LINE(ERR, EAL, "Unsupported IOVA mode"); return -1; } @@ -1668,18 +1668,18 @@ spapr_dma_win_size(void) /* walk the memseg list to find the page size/max VA address */ memset(&param, 0, sizeof(param)); if (rte_memseg_list_walk(vfio_spapr_size_walk, &param) < 0) { - RTE_LOG(ERR, EAL, "Failed to walk memseg list for DMA window size\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to walk memseg list for DMA window size"); return -1; } /* we can't be sure if DMA window covers external memory */ if (param.is_user_managed) - RTE_LOG(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU\n"); + RTE_LOG_LINE(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU"); /* check physical/virtual memory size */ if (find_highest_mem_addr(&param) < 0) return -1; - RTE_LOG(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64 "\n", + RTE_LOG_LINE(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64, spapr_dma_win_len); spapr_dma_win_page_sz = param.page_sz; rte_mem_set_dma_mask(rte_ctz64(spapr_dma_win_len)); @@ -1703,7 +1703,7 @@ vfio_spapr_create_dma_window(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info); if (ret) { - RTE_LOG(ERR, EAL, "Cannot get IOMMU info, error %i (%s)\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get IOMMU info, error %i (%s)", errno, strerror(errno)); return -1; } @@ -1744,17 +1744,17 @@ vfio_spapr_create_dma_window(int vfio_container_fd) } #endif /* VFIO_IOMMU_SPAPR_INFO_DDW */ if (ret) { - RTE_LOG(ERR, EAL, "Cannot create new DMA window, error " - "%i (%s)\n", errno, strerror(errno)); - RTE_LOG(ERR, EAL, - "Consider using a larger hugepage size if supported by the system\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create new DMA window, error " + "%i (%s)", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, + "Consider using a larger hugepage size if supported by the system"); return -1; } /* verify the start address */ if (create.start_addr != 0) { - RTE_LOG(ERR, EAL, "Received unsupported start address 0x%" - PRIx64 "\n", (uint64_t)create.start_addr); + RTE_LOG_LINE(ERR, EAL, "Received unsupported start address 0x%" + PRIx64, (uint64_t)create.start_addr); return -1; } return ret; @@ -1769,13 +1769,13 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, if (do_map) { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 1)) { - RTE_LOG(ERR, EAL, "Failed to map DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to map DMA"); ret = -1; } } else { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 0)) { - RTE_LOG(ERR, EAL, "Failed to unmap DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Failed to unmap DMA"); ret = -1; } } @@ -1787,7 +1787,7 @@ static int vfio_spapr_dma_map(int vfio_container_fd) { if (vfio_spapr_create_dma_window(vfio_container_fd) < 0) { -
RTE_LOG(ERR, EAL, "Could not create new DMA window!\n"); + RTE_LOG_LINE(ERR, EAL, "Could not create new DMA window!"); return -1; } @@ -1822,14 +1822,14 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, const struct vfio_iommu_type *t = vfio_cfg->vfio_iommu_type; if (!t) { - RTE_LOG(ERR, EAL, "VFIO support not initialized\n"); + RTE_LOG_LINE(ERR, EAL, "VFIO support not initialized"); rte_errno = ENODEV; return -1; } if (!t->dma_user_map_func) { - RTE_LOG(ERR, EAL, - "VFIO custom DMA region mapping not supported by IOMMU %s\n", + RTE_LOG_LINE(ERR, EAL, + "VFIO custom DMA region mapping not supported by IOMMU %s", t->name); rte_errno = ENOTSUP; return -1; @@ -1851,7 +1851,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, user_mem_maps = &vfio_cfg->mem_maps; rte_spinlock_recursive_lock(&user_mem_maps->lock); if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "No more space for user mem maps\n"); + RTE_LOG_LINE(ERR, EAL, "No more space for user mem maps"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1865,7 +1865,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * this to be unsupported, because we can't just store any old * mapping and pollute list of active mappings willy-nilly. */ - RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't map new region for DMA"); ret = -1; goto out; } @@ -1921,7 +1921,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, orig_maps, RTE_DIM(orig_maps)); /* did we find anything? */ if (n_orig < 0) { - RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find previously mapped region"); rte_errno = EINVAL; ret = -1; goto out; @@ -1943,7 +1943,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, vaddr + len, iova + len); if (!start_aligned || !end_aligned) { - RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA partial unmap unsupported"); rte_errno = ENOTSUP; ret = -1; goto out; @@ -1961,7 +1961,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, /* can we store the new maps in our list? */ newlen = (user_mem_maps->n_maps - n_orig) + n_new; if (newlen >= VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n"); + RTE_LOG_LINE(ERR, EAL, "Not enough space to store partial mapping"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1978,11 +1978,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * within our mapped range but had invalid alignment). 
*/ if (rte_errno != ENODEV && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't unmap region for DMA\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't unmap region for DMA"); ret = -1; goto out; } else { - RTE_LOG(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway\n"); + RTE_LOG_LINE(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway"); } } @@ -2005,8 +2005,8 @@ rte_vfio_noiommu_is_enabled(void) fd = open(VFIO_NOIOMMU_MODE, O_RDONLY); if (fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } /* @@ -2019,8 +2019,8 @@ rte_vfio_noiommu_is_enabled(void) cnt = read(fd, &c, 1); close(fd); if (cnt != 1) { - RTE_LOG(ERR, EAL, "Unable to read from VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + RTE_LOG_LINE(ERR, EAL, "Unable to read from VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -2039,13 +2039,13 @@ rte_vfio_container_create(void) } if (i == VFIO_MAX_CONTAINERS) { - RTE_LOG(ERR, EAL, "Exceed max VFIO container limit\n"); + RTE_LOG_LINE(ERR, EAL, "Exceed max VFIO container limit"); return -1; } vfio_cfgs[i].vfio_container_fd = rte_vfio_get_container_fd(); if (vfio_cfgs[i].vfio_container_fd < 0) { - RTE_LOG(NOTICE, EAL, "Fail to create a new VFIO container\n"); + RTE_LOG_LINE(NOTICE, EAL, "Fail to create a new VFIO container"); return -1; } @@ -2060,7 +2060,7 @@ rte_vfio_container_destroy(int container_fd) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2084,7 +2084,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2100,7 +2100,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2113,14 +2113,14 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) /* This should not happen */ if (i == VFIO_MAX_GROUPS || cur_grp == NULL) { - RTE_LOG(ERR, EAL, "Specified VFIO group number not found\n"); + RTE_LOG_LINE(ERR, EAL, "Specified VFIO group number not found"); return -1; } if (cur_grp->fd >= 0 && close(cur_grp->fd) < 0) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "Error when closing vfio_group_fd for iommu_group_num " - "%d\n", iommu_group_num); + "%d", iommu_group_num); return -1; } cur_grp->group_num = -1; @@ -2144,7 +2144,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } @@ -2164,7 +2164,7 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd"); return -1; } diff --git a/lib/eal/linux/eal_vfio_mp_sync.c 
b/lib/eal/linux/eal_vfio_mp_sync.c index 157f20e583..a78113844b 100644 --- a/lib/eal/linux/eal_vfio_mp_sync.c +++ b/lib/eal/linux/eal_vfio_mp_sync.c @@ -33,7 +33,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) (const struct vfio_mp_param *)msg->param; if (msg->len_param != sizeof(*m)) { - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } @@ -95,7 +95,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) break; } default: - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!"); return -1; } diff --git a/lib/eal/riscv/rte_cycles.c b/lib/eal/riscv/rte_cycles.c index 358f271311..e27e02d9a9 100644 --- a/lib/eal/riscv/rte_cycles.c +++ b/lib/eal/riscv/rte_cycles.c @@ -38,14 +38,14 @@ __rte_riscv_timefrq(void) break; } fail: - RTE_LOG(WARNING, EAL, "Unable to read timebase-frequency from FDT.\n"); + RTE_LOG_LINE(WARNING, EAL, "Unable to read timebase-frequency from FDT."); return 0; } uint64_t get_tsc_freq_arch(void) { - RTE_LOG(NOTICE, EAL, "TSC using RISC-V %s.\n", + RTE_LOG_LINE(NOTICE, EAL, "TSC using RISC-V %s.", RTE_RISCV_RDTSC_USE_HPM ? "rdcycle" : "rdtime"); if (!RTE_RISCV_RDTSC_USE_HPM) return __rte_riscv_timefrq(); diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c index afbab9368a..4d90c2707f 100644 --- a/lib/eal/unix/eal_filesystem.c +++ b/lib/eal/unix/eal_filesystem.c @@ -41,7 +41,7 @@ int eal_create_runtime_dir(void) /* create DPDK subdirectory under runtime dir */ ret = snprintf(tmp, sizeof(tmp), "%s/dpdk", directory); if (ret < 0 || ret == sizeof(tmp)) { - RTE_LOG(ERR, EAL, "Error creating DPDK runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating DPDK runtime path name"); return -1; } @@ -49,7 +49,7 @@ int eal_create_runtime_dir(void) ret = snprintf(run_dir, sizeof(run_dir), "%s/%s", tmp, eal_get_hugefile_prefix()); if (ret < 0 || ret == sizeof(run_dir)) { - RTE_LOG(ERR, EAL, "Error creating prefix-specific runtime path name\n"); + RTE_LOG_LINE(ERR, EAL, "Error creating prefix-specific runtime path name"); return -1; } @@ -58,14 +58,14 @@ int eal_create_runtime_dir(void) */ ret = mkdir(tmp, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", tmp, strerror(errno)); return -1; } ret = mkdir(run_dir, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s", run_dir, strerror(errno)); return -1; } @@ -84,20 +84,20 @@ int eal_parse_sysfs_value(const char *filename, unsigned long *val) char *end = NULL; if ((f = fopen(filename, "r")) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot open sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot open sysfs value %s", __func__, filename); return -1; } if (fgets(buf, sizeof(buf), f) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot read sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot read sysfs value %s", __func__, filename); fclose(f); return -1; } *val = strtoul(buf, &end, 0); if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) { - RTE_LOG(ERR, EAL, "%s(): cannot parse sysfs value %s\n", + RTE_LOG_LINE(ERR, EAL, "%s(): cannot parse sysfs value %s", __func__, filename); fclose(f); return -1; diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c index 1a7cf8e7b7..b071bb1396 100644 --- a/lib/eal/unix/eal_firmware.c +++ 
b/lib/eal/unix/eal_firmware.c @@ -151,7 +151,7 @@ rte_firmware_read(const char *name, void **buf, size_t *bufsz) path[PATH_MAX - 1] = '\0'; #ifndef RTE_HAS_LIBARCHIVE if (access(path, F_OK) == 0) { - RTE_LOG(WARNING, EAL, "libarchive not linked, %s cannot be decompressed\n", + RTE_LOG_LINE(WARNING, EAL, "libarchive not linked, %s cannot be decompressed", path); } #else diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c index 68ae93bd6e..16183fb395 100644 --- a/lib/eal/unix/eal_unix_memory.c +++ b/lib/eal/unix/eal_unix_memory.c @@ -29,8 +29,8 @@ mem_map(void *requested_addr, size_t size, int prot, int flags, { void *virt = mmap(requested_addr, size, prot, flags, fd, offset); if (virt == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, - "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s\n", + RTE_LOG_LINE(DEBUG, EAL, + "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s", requested_addr, size, prot, flags, fd, offset, strerror(errno)); rte_errno = errno; @@ -44,7 +44,7 @@ mem_unmap(void *virt, size_t size) { int ret = munmap(virt, size); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s", virt, size, strerror(errno)); rte_errno = errno; } @@ -83,7 +83,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump) int flags = dump ? EAL_DODUMP : EAL_DONTDUMP; int ret = madvise(virt, size, flags); if (ret) { - RTE_LOG(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s", virt, size, flags, strerror(rte_errno)); rte_errno = errno; } diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c index 36a21ab2f9..bee77e9448 100644 --- a/lib/eal/unix/rte_thread.c +++ b/lib/eal/unix/rte_thread.c @@ -53,7 +53,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = sched_get_priority_max(SCHED_RR); break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -79,7 +79,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -97,7 +97,7 @@ thread_start_wrapper(void *arg) if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) { ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "rte_thread_set_affinity_by_id failed"); } pthread_mutex_lock(&ctx->wrapper_mutex); @@ -136,7 +136,7 @@ rte_thread_create(rte_thread_t *thread_id, if (thread_attr != NULL) { ret = pthread_attr_init(&attr); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_init failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_init failed"); goto cleanup; } @@ -149,7 +149,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setinheritsched(attrp, PTHREAD_EXPLICIT_SCHED); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setinheritsched failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setinheritsched failed"); goto cleanup; } @@ -165,13 +165,13 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setschedpolicy(attrp, policy); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedpolicy failed\n"); + 
RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedpolicy failed"); goto cleanup; } ret = pthread_attr_setschedparam(attrp, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedparam failed"); goto cleanup; } } @@ -179,7 +179,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp, thread_start_wrapper, &ctx); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_create failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_create failed"); goto cleanup; } @@ -211,7 +211,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr) ret = pthread_join((pthread_t)thread_id.opaque_id, pres); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_join failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_join failed"); return ret; } @@ -256,7 +256,7 @@ rte_thread_get_priority(rte_thread_t thread_id, ret = pthread_getschedparam((pthread_t)thread_id.opaque_id, &policy, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_getschedparam failed\n"); + RTE_LOG_LINE(DEBUG, EAL, "pthread_getschedparam failed"); goto cleanup; } @@ -295,13 +295,13 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *)) *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } err = pthread_key_create(&((*key)->thread_index), destructor); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_create failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_create failed: %s", strerror(err)); free(*key); rte_errno = ENOEXEC; @@ -316,13 +316,13 @@ rte_thread_key_delete(rte_thread_key key) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_key_delete(key->thread_index); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_delete failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_key_delete failed: %s", strerror(err)); free(key); rte_errno = ENOEXEC; @@ -338,13 +338,13 @@ rte_thread_value_set(rte_thread_key key, const void *value) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_setspecific(key->thread_index, value); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_setspecific failed: %s\n", + RTE_LOG_LINE(DEBUG, EAL, "pthread_setspecific failed: %s", strerror(err)); rte_errno = ENOEXEC; return -1; @@ -356,7 +356,7 @@ void * rte_thread_value_get(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c index 7ec2152211..b573fa7c74 100644 --- a/lib/eal/windows/eal.c +++ b/lib/eal/windows/eal.c @@ -67,7 +67,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -175,16 +175,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %c is not supported " + "on Windows", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Windows\n", + RTE_LOG_LINE(ERR, EAL, "Option %s is not supported " + "on Windows", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Windows\n", opt); + RTE_LOG_LINE(ERR, EAL, "Option %d is not supported " + "on Windows", opt); } eal_usage(prgname); return -1; @@ -217,7 +217,7 @@ static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + RTE_LOG_LINE(ERR, EAL, "%s", msg); } /* Stubs to enable EAL trace point compilation @@ -312,8 +312,8 @@ rte_eal_init(int argc, char **argv) /* Prevent creation of shared memory files. */ if (internal_conf->in_memory == 0) { - RTE_LOG(WARNING, EAL, "Multi-process support is requested, " - "but not available.\n"); + RTE_LOG_LINE(WARNING, EAL, "Multi-process support is requested, " + "but not available."); internal_conf->in_memory = 1; internal_conf->no_shconf = 1; } @@ -356,21 +356,21 @@ rte_eal_init(int argc, char **argv) has_phys_addr = true; if (eal_mem_virt2iova_init() < 0) { /* Non-fatal error if physical addresses are not required. */ - RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, " - "PA will not be available\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot access virt2phys driver, " + "PA will not be available"); has_phys_addr = false; } iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -392,7 +392,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n", + RTE_LOG_LINE(DEBUG, EAL, "Selected IOVA mode '%s'", iova_mode == RTE_IOVA_PA ? "PA" : "VA"); rte_eal_get_configuration()->iova_mode = iova_mode; @@ -442,7 +442,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -474,7 +474,7 @@ rte_eal_init(int argc, char **argv) ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id, &lcore_config[i].cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Cannot set affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot set affinity"); } /* Initialize services so drivers can register services during probe. 
*/ diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c index 34b52380ce..c56aa0e687 100644 --- a/lib/eal/windows/eal_alarm.c +++ b/lib/eal/windows/eal_alarm.c @@ -92,7 +92,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) int ret; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); ret = -EINVAL; goto exit; } @@ -105,7 +105,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ap = calloc(1, sizeof(*ap)); if (ap == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate alarm entry\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate alarm entry"); ret = -ENOMEM; goto exit; } @@ -129,7 +129,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) /* Directly schedule callback execution. */ ret = alarm_set(ap, deadline); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm"); goto fail; } } else { @@ -143,7 +143,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ret = intr_thread_exec_sync(alarm_task_exec, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm in interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm in interrupt thread"); goto fail; } @@ -187,7 +187,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg) removed = 0; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + RTE_LOG_LINE(ERR, EAL, "NULL callback"); return -EINVAL; } @@ -246,7 +246,7 @@ intr_thread_exec_sync(void (*func)(void *arg), void *arg) rte_spinlock_lock(&task.lock); ret = eal_intr_thread_schedule(intr_thread_entry, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot schedule task to interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot schedule task to interrupt thread"); return -EINVAL; } diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c index 56ed70df7d..be646080c3 100644 --- a/lib/eal/windows/eal_debug.c +++ b/lib/eal/windows/eal_debug.c @@ -48,8 +48,8 @@ rte_dump_stack(void) error_code = GetLastError(); if (error_code == ERROR_INVALID_ADDRESS) { /* Missing symbols, print message */ - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [<missing_symbols>]\n", frame_num--); + RTE_LOG_LINE(ERR, EAL, + "%d: [<missing_symbols>]", frame_num--); continue; } else { RTE_LOG_WIN32_ERR("SymFromAddr()"); @@ -67,8 +67,8 @@ rte_dump_stack(void) } } - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [%s (%s+0x%0llx)[0x%0llX]]\n", frame_num, + RTE_LOG_LINE(ERR, EAL, + "%d: [%s (%s+0x%0llx)[0x%0llX]]", frame_num, error_code ? 
"<unknown>" : line.FileName, symbol_info->Name, sym_disp, symbol_info->Address); frame_num--; diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c index 35191056fd..264bc4a649 100644 --- a/lib/eal/windows/eal_dev.c +++ b/lib/eal/windows/eal_dev.c @@ -7,27 +7,27 @@ int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows"); return -1; } diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index 775c67e4c4..f2336fbe1e 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -89,8 +89,8 @@ hugepage_info_init(void) } hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz; - RTE_LOG(DEBUG, EAL, - "Found %u hugepages of %zu bytes on socket %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "Found %u hugepages of %zu bytes on socket %u", hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id); } @@ -105,13 +105,13 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, - "Cannot claim hugepage privilege, check large-page support privilege\n"); + RTE_LOG_LINE(ERR, EAL, + "Cannot claim hugepage privilege, check large-page support privilege"); return -1; } if (hugepage_info_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot discover available hugepages\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot discover available hugepages"); return -1; } diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c index 49efdc098c..a9c62453b8 100644 --- a/lib/eal/windows/eal_interrupts.c +++ b/lib/eal/windows/eal_interrupts.c @@ -39,7 +39,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) bool finished = false; if (eal_intr_thread_handle_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot open interrupt thread handle\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot open interrupt thread handle"); goto cleanup; } @@ -57,7 +57,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) DWORD error = GetLastError(); if (error != WAIT_IO_COMPLETION) { RTE_LOG_WIN32_ERR("GetQueuedCompletionStatusEx()"); - RTE_LOG(ERR, EAL, "Failed waiting for interrupts\n"); + RTE_LOG_LINE(ERR, EAL, "Failed waiting for interrupts"); break; } @@ -94,7 +94,7 @@ rte_eal_intr_init(void) intr_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1); if (intr_iocp == NULL) { RTE_LOG_WIN32_ERR("CreateIoCompletionPort()"); - RTE_LOG(ERR, EAL, "Cannot create interrupt IOCP\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt IOCP"); return -1; } @@ -102,7 +102,7 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, "Cannot create interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt thread"); } return ret; @@ -140,7 +140,7 @@ eal_intr_thread_cancel(void) if (!PostQueuedCompletionStatus( intr_iocp, 0, IOCP_KEY_SHUTDOWN, NULL)) { RTE_LOG_WIN32_ERR("PostQueuedCompletionStatus()"); - RTE_LOG(ERR, EAL, "Cannot cancel 
interrupt thread\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot cancel interrupt thread"); return; } diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c index 286fe241eb..d65fa1e7a6 100644 --- a/lib/eal/windows/eal_lcore.c +++ b/lib/eal/windows/eal_lcore.c @@ -65,7 +65,8 @@ eal_query_group_affinity(void) &infos_size)) { DWORD error = GetLastError(); if (error != ERROR_INSUFFICIENT_BUFFER) { - RTE_LOG(ERR, EAL, "Cannot get group information size, error %lu\n", error); + RTE_LOG_LINE(ERR, EAL, "Cannot get group information size, error %lu", + error); rte_errno = EINVAL; ret = -1; goto cleanup; @@ -74,7 +75,7 @@ eal_query_group_affinity(void) infos = malloc(infos_size); if (infos == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for NUMA node information\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for NUMA node information"); rte_errno = ENOMEM; ret = -1; goto cleanup; @@ -82,7 +83,7 @@ eal_query_group_affinity(void) if (!GetLogicalProcessorInformationEx(RelationGroup, infos, &infos_size)) { - RTE_LOG(ERR, EAL, "Cannot get group information, error %lu\n", + RTE_LOG_LINE(ERR, EAL, "Cannot get group information, error %lu", GetLastError()); rte_errno = EINVAL; ret = -1; diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c index aa7589b81d..fa9d1fdc1e 100644 --- a/lib/eal/windows/eal_memalloc.c +++ b/lib/eal/windows/eal_memalloc.c @@ -52,7 +52,7 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, } /* Bugcheck, should not happen. */ - RTE_LOG(DEBUG, EAL, "Attempted to reallocate segment %p " + RTE_LOG_LINE(DEBUG, EAL, "Attempted to reallocate segment %p " "(size %zu) on socket %d", ms->addr, ms->len, ms->socket_id); return -1; @@ -66,8 +66,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* Request a new chunk of memory from OS. */ addr = eal_mem_alloc_socket(alloc_sz, socket_id); if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate %zu bytes " - "on socket %d\n", alloc_sz, socket_id); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate %zu bytes " + "on socket %d", alloc_sz, socket_id); return -1; } } else { @@ -79,15 +79,15 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, * error, because it breaks MSL assumptions. 
*/ if ((addr != NULL) && (addr != requested_addr)) { - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", requested_addr); return -1; } if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot commit reserved memory %p " - "(size %zu) on socket %d\n", + RTE_LOG_LINE(DEBUG, EAL, "Cannot commit reserved memory %p " + "(size %zu) on socket %d", requested_addr, alloc_sz, socket_id); return -1; } @@ -101,8 +101,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Cannot get IOVA of allocated segment\n"); + RTE_LOG_LINE(DEBUG, EAL, + "Cannot get IOVA of allocated segment"); goto error; } @@ -115,12 +115,12 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, page = &info.VirtualAttributes; if (!page->Valid || !page->LargePage) { - RTE_LOG(DEBUG, EAL, "Got regular page instead of a hugepage\n"); + RTE_LOG_LINE(DEBUG, EAL, "Got regular page instead of a hugepage"); goto error; } if (page->Node != numa_node) { - RTE_LOG(DEBUG, EAL, - "NUMA node hint %u (socket %d) not respected, got %u\n", + RTE_LOG_LINE(DEBUG, EAL, + "NUMA node hint %u (socket %d) not respected, got %u", numa_node, socket_id, page->Node); goto error; } @@ -141,8 +141,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* During decommitment, memory is temporarily returned * to the system and the address may become unavailable. */ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", addr); + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", addr); } return -1; } @@ -153,8 +153,8 @@ free_seg(struct rte_memseg *ms) if (eal_mem_decommit(ms->addr, ms->len)) { if (rte_errno == EADDRNOTAVAIL) { /* See alloc_seg() for explanation. 
*/ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", ms->addr); } return -1; @@ -233,8 +233,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) map_addr = RTE_PTR_ADD(cur_msl->base_va, cur_idx * page_sz); if (alloc_seg(cur, map_addr, wa->socket, wa->hi)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, " - "but only %i were allocated\n", need, i); + RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, " + "but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ if (!wa->exact) @@ -249,7 +249,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) rte_fbarray_set_free(arr, j); if (free_seg(tmp)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -318,7 +318,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, eal_get_internal_configuration(); if (internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "dynamic allocation not supported in legacy mode\n"); + RTE_LOG_LINE(ERR, EAL, "dynamic allocation not supported in legacy mode"); return -ENOTSUP; } @@ -330,7 +330,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, } } if (!hi) { - RTE_LOG(ERR, EAL, "cannot find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find relevant hugepage_info entry"); return -1; } @@ -346,7 +346,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "cannot find suitable memseg_list\n"); + RTE_LOG_LINE(ERR, EAL, "cannot find suitable memseg_list"); ret = -1; } else if (ret > 0) { ret = (int)wa.segs_allocated; @@ -383,7 +383,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed"); ret = -1; continue; } @@ -396,7 +396,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -411,7 +411,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list"); ret = -1; } return ret; diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index fd39155163..7e1e8d4c84 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -114,8 +114,8 @@ eal_mem_win32api_init(void) library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ - RTE_LOG(ERR, EAL, "Windows 10 or Windows Server 2019 " - " is required for memory management\n"); + RTE_LOG_LINE(ERR, EAL, "Windows 10 or Windows Server 2019 " + " is required for memory management"); ret = -1; } @@ -173,8 +173,8 @@ eal_mem_virt2iova_init(void) detail = malloc(detail_size); if (detail == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate virt2phys " - "device interface detail data\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate virt2phys " + "device interface detail data"); goto exit; } @@ -185,7 +185,7 @@ eal_mem_virt2iova_init(void) goto exit; } - RTE_LOG(DEBUG, EAL, "Found virt2phys device: %s\n", detail->DevicePath); + RTE_LOG_LINE(DEBUG, EAL, "Found virt2phys device: %s", detail->DevicePath); virt2phys_device = CreateFile( detail->DevicePath, 0, 0, NULL, OPEN_EXISTING, 0, NULL); @@ -574,8 +574,8 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags, int ret = mem_free(requested_addr, size, true); if (ret) { if (ret > 0) { - RTE_LOG(ERR, EAL, "Cannot map memory " - "to a region not reserved\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot map memory " + "to a region not reserved"); rte_errno = EADDRNOTAVAIL; } return NULL; @@ -691,7 +691,7 @@ eal_nohuge_init(void) NULL, mem_sz, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE); if (addr == NULL) { RTE_LOG_WIN32_ERR("VirtualAlloc(size=%#zx)", mem_sz); - RTE_LOG(ERR, EAL, "Cannot allocate memory\n"); + RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory"); return -1; } @@ -702,9 +702,9 @@ eal_nohuge_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, + RTE_LOG_LINE(ERR, EAL, "%s(): couldn't allocate memory due to IOVA " - "exceeding limits of current DMA mask.\n", __func__); + "exceeding limits of current DMA mask.", __func__); return -1; } diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 43b228d388..ee206f365d 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -17,7 +17,7 @@ */ #define EAL_LOG_NOT_IMPLEMENTED() \ do { \ - RTE_LOG(DEBUG, EAL, "%s() is not implemented\n", __func__); \ + RTE_LOG_LINE(DEBUG, EAL, "%s() is not implemented", __func__); \ rte_errno = ENOTSUP; \ } while (0) @@ -25,7 +25,7 @@ * Log current function as a stub. */ #define EAL_LOG_STUB() \ - RTE_LOG(DEBUG, EAL, "Windows: %s() is a stub\n", __func__) + RTE_LOG_LINE(DEBUG, EAL, "Windows: %s() is a stub", __func__) /** * Create a map of processors and cores on the system. diff --git a/lib/eal/windows/include/rte_windows.h b/lib/eal/windows/include/rte_windows.h index 83730c3d2e..0b0d117865 100644 --- a/lib/eal/windows/include/rte_windows.h +++ b/lib/eal/windows/include/rte_windows.h @@ -48,9 +48,9 @@ extern "C" { * Log GetLastError() with context, usually a Win32 API function and arguments. */ #define RTE_LOG_WIN32_ERR(...) 
\ - RTE_LOG(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", GetLastError(), \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ + RTE_FMT_HEAD(__VA_ARGS__ ,), GetLastError(), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) #ifdef __cplusplus } diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c index 145ac4b5aa..7c62f57e0d 100644 --- a/lib/eal/windows/rte_thread.c +++ b/lib/eal/windows/rte_thread.c @@ -67,7 +67,7 @@ static int thread_log_last_error(const char *message) { DWORD error = GetLastError(); - RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message); + RTE_LOG_LINE(DEBUG, EAL, "GetLastError()=%lu: %s", error, message); return thread_translate_win32_error(error); } @@ -90,7 +90,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = THREAD_PRIORITY_TIME_CRITICAL; break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid."); return EINVAL; } @@ -109,7 +109,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; case HIGH_PRIORITY_CLASS: - RTE_LOG(WARNING, EAL, "The OS priority class is high not real-time.\n"); + RTE_LOG_LINE(WARNING, EAL, "The OS priority class is high not real-time."); /* FALLTHROUGH */ case REALTIME_PRIORITY_CLASS: if (os_pri == THREAD_PRIORITY_TIME_CRITICAL) { @@ -118,7 +118,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -148,7 +148,7 @@ convert_cpuset_to_affinity(const rte_cpuset_t *cpuset, if (affinity->Group == (USHORT)-1) { affinity->Group = cpu_affinity->Group; } else if (affinity->Group != cpu_affinity->Group) { - RTE_LOG(DEBUG, EAL, "All processors must belong to the same processor group\n"); + RTE_LOG_LINE(DEBUG, EAL, "All processors must belong to the same processor group"); ret = ENOTSUP; goto cleanup; } @@ -194,7 +194,7 @@ rte_thread_create(rte_thread_t *thread_id, ctx = calloc(1, sizeof(*ctx)); if (ctx == NULL) { - RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n"); + RTE_LOG_LINE(DEBUG, EAL, "Insufficient memory for thread context allocations"); ret = ENOMEM; goto cleanup; } @@ -217,7 +217,7 @@ rte_thread_create(rte_thread_t *thread_id, &thread_affinity ); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); thread_exit = true; goto resume_thread; } @@ -232,7 +232,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = rte_thread_set_priority(*thread_id, thread_attr->priority); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to set thread priority\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to set thread priority"); thread_exit = true; goto resume_thread; } @@ -360,7 +360,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) CloseHandle(thread_handle); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name"); } int @@ -446,7 +446,7 @@ rte_thread_key_create(rte_thread_key *key, { *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key."); rte_errno = ENOMEM; 
return -1; } @@ -464,7 +464,7 @@ int rte_thread_key_delete(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -484,7 +484,7 @@ rte_thread_value_set(rte_thread_key key, const void *value) char *p; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -504,7 +504,7 @@ rte_thread_value_get(rte_thread_key key) void *output; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } @@ -532,7 +532,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id, ret = convert_cpuset_to_affinity(cpuset, &thread_affinity); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity"); goto cleanup; } diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c index 78fb9250ef..e441263335 100644 --- a/lib/efd/rte_efd.c +++ b/lib/efd/rte_efd.c @@ -512,13 +512,13 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list); if (online_cpu_socket_bitmask == 0) { - RTE_LOG(ERR, EFD, "At least one CPU socket must be enabled " - "in the bitmask\n"); + RTE_LOG_LINE(ERR, EFD, "At least one CPU socket must be enabled " + "in the bitmask"); return NULL; } if (max_num_rules == 0) { - RTE_LOG(ERR, EFD, "Max num rules must be higher than 0\n"); + RTE_LOG_LINE(ERR, EFD, "Max num rules must be higher than 0"); return NULL; } @@ -557,7 +557,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, te = rte_zmalloc("EFD_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, EFD, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -567,15 +567,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD table management structure" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating EFD table management structure" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, "Allocated EFD table management structure " - "on socket %u\n", offline_cpu_socket); + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD table management structure " + "on socket %u", offline_cpu_socket); table->max_num_rules = num_chunks * EFD_TARGET_CHUNK_MAX_NUM_RULES; table->num_rules = 0; @@ -589,16 +589,16 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (key_array == NULL) { - RTE_LOG(ERR, EFD, "Allocating key array" - " on socket %u failed\n", + RTE_LOG_LINE(ERR, EFD, "Allocating key array" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } table->keys = key_array; strlcpy(table->name, name, sizeof(table->name)); - RTE_LOG(DEBUG, EFD, "Creating an EFD table with %u chunks," - " which potentially supports %u entries\n", + RTE_LOG_LINE(DEBUG, EFD, "Creating an EFD table with %u chunks," + " which potentially supports %u entries", num_chunks, table->max_num_rules); /* Make sure all the allocatable table pointers are NULL initially */ @@ -626,15 +626,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, socket_id); 
if (table->chunks[socket_id] == NULL) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Allocating EFD online table on " - "socket %u failed\n", + "socket %u failed", socket_id); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD online table of size " - "%"PRIu64" bytes (%.2f MB) on socket %u\n", + "%"PRIu64" bytes (%.2f MB) on socket %u", online_table_size, (float) online_table_size / (1024.0F * 1024.0F), @@ -678,14 +678,14 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table->offline_chunks == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD offline table on socket %u " - "failed\n", offline_cpu_socket); + RTE_LOG_LINE(ERR, EFD, "Allocating EFD offline table on socket %u " + "failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD offline table of size %"PRIu64" bytes " - " (%.2f MB) on socket %u\n", offline_table_size, + " (%.2f MB) on socket %u", offline_table_size, (float) offline_table_size / (1024.0F * 1024.0F), offline_cpu_socket); @@ -698,7 +698,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, r = rte_ring_create(ring_name, rte_align32pow2(table->max_num_rules), offline_cpu_socket, 0); if (r == NULL) { - RTE_LOG(ERR, EFD, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, EFD, "memory allocation failed"); rte_efd_free(table); return NULL; } @@ -1018,9 +1018,9 @@ efd_compute_update(struct rte_efd_table * const table, if (found == 0) { /* Key does not exist. Insert the rule into the bin/group */ if (unlikely(current_group->num_rules >= EFD_MAX_GROUP_NUM_RULES)) { - RTE_LOG(ERR, EFD, + RTE_LOG_LINE(ERR, EFD, "Fatal: No room remaining for insert into " - "chunk %u group %u bin %u\n", + "chunk %u group %u bin %u", *chunk_id, current_group_id, *bin_id); return RTE_EFD_UPDATE_FAILED; @@ -1028,9 +1028,9 @@ efd_compute_update(struct rte_efd_table * const table, if (unlikely(current_group->num_rules == (EFD_MAX_GROUP_NUM_RULES - 1))) { - RTE_LOG(INFO, EFD, "Warn: Insert into last " + RTE_LOG_LINE(INFO, EFD, "Warn: Insert into last " "available slot in chunk %u " - "group %u bin %u\n", *chunk_id, + "group %u bin %u", *chunk_id, current_group_id, *bin_id); status = RTE_EFD_UPDATE_WARN_GROUP_FULL; } @@ -1117,10 +1117,10 @@ efd_compute_update(struct rte_efd_table * const table, if (current_group != new_group && new_group->num_rules + bin_size > EFD_MAX_GROUP_NUM_RULES) { - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Unable to move_groups to dest group " "containing %u entries." - "bin_size:%u choice:%02x\n", + "bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); goto next_choice; @@ -1135,9 +1135,9 @@ efd_compute_update(struct rte_efd_table * const table, if (!ret) return status; - RTE_LOG(DEBUG, EFD, + RTE_LOG_LINE(DEBUG, EFD, "Failed to find perfect hash for group " - "containing %u entries. bin_size:%u choice:%02x\n", + "containing %u entries. 
bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); /* Restore groups modified to their previous state */ revert_groups(current_group, new_group, bin_size); diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index f88e71a59d..3d9bf6fe9d 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -171,8 +171,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) rib = rte_rib_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,9 +217,9 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, + RTE_LOG_LINE(ERR, LPM, "FIB dataplane struct %s memory allocation failed " - "with err %d\n", name, ret); + "with err %d", name, ret); rte_errno = -ret; goto free_fib; } diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index ab1d960479..2d23c09eea 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -171,8 +171,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) rib = rte_rib6_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib6), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,8 +217,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, - "FIB dataplane struct %s memory allocation failed\n", + RTE_LOG_LINE(ERR, LPM, + "FIB dataplane struct %s memory allocation failed", name); rte_errno = -ret; goto free_fib; diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c index 8e4364f060..f2a398d3bb 100644 --- a/lib/hash/rte_cuckoo_hash.c +++ b/lib/hash/rte_cuckoo_hash.c @@ -164,7 +164,7 @@ rte_hash_create(const struct rte_hash_parameters 
*params) hash_list = RTE_TAILQ_CAST(rte_hash_tailq.head, rte_hash_list); if (params == NULL) { - RTE_LOG(ERR, HASH, "rte_hash_create has no parameters\n"); + RTE_LOG_LINE(ERR, HASH, "%s has no parameters", __func__); return NULL; } @@ -173,13 +173,13 @@ rte_hash_create(const struct rte_hash_parameters *params) (params->entries < RTE_HASH_BUCKET_ENTRIES) || (params->key_len == 0)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n"); + RTE_LOG_LINE(ERR, HASH, "%s has invalid parameters", __func__); return NULL; } if (params->extra_flag & ~RTE_HASH_EXTRA_FLAGS_MASK) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: unsupported extra flags\n"); + RTE_LOG_LINE(ERR, HASH, "%s: unsupported extra flags", __func__); return NULL; } @@ -187,8 +187,8 @@ rte_hash_create(const struct rte_hash_parameters *params) if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) && (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: choose rw concurrency or " - "rw concurrency lock free\n"); + RTE_LOG_LINE(ERR, HASH, "%s: choose rw concurrency or rw concurrency lock free", + __func__); return NULL; } @@ -238,7 +238,7 @@ rte_hash_create(const struct rte_hash_parameters *params) r = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err; } @@ -254,8 +254,8 @@ rte_hash_create(const struct rte_hash_parameters *params) params->socket_id, 0); if (r_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err; } } @@ -280,7 +280,7 @@ rte_hash_create(const struct rte_hash_parameters *params) te = rte_zmalloc("HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "tailq entry allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "tailq entry allocation failed"); goto err_unlock; } @@ -288,7 +288,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (h == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -297,7 +297,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets == NULL) { - RTE_LOG(ERR, HASH, "buckets memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "buckets memory allocation failed"); goto err_unlock; } @@ -307,8 +307,8 @@ rte_hash_create(const struct rte_hash_parameters *params) num_buckets * sizeof(struct rte_hash_bucket), RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation " + "failed"); goto err_unlock; } /* Populate ext bkt ring. 
We reserve 0 similar to the @@ -323,8 +323,8 @@ rte_hash_create(const struct rte_hash_parameters *params) ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * num_key_slots, 0); if (ext_bkt_to_free == NULL) { - RTE_LOG(ERR, HASH, "ext bkt to free memory allocation " - "failed\n"); + RTE_LOG_LINE(ERR, HASH, "ext bkt to free memory allocation " + "failed"); goto err_unlock; } } @@ -339,7 +339,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (k == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -347,7 +347,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (tbl_chng_cnt == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); goto err_unlock; } @@ -395,7 +395,7 @@ rte_hash_create(const struct rte_hash_parameters *params) sizeof(struct lcore_cache) * RTE_MAX_LCORE, RTE_CACHE_LINE_SIZE, params->socket_id); if (local_free_slots == NULL) { - RTE_LOG(ERR, HASH, "local free slots memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "local free slots memory allocation failed"); goto err_unlock; } } @@ -637,7 +637,7 @@ rte_hash_reset(struct rte_hash *h) /* Reclaim all the resources */ rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL); if (pending != 0) - RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n"); + RTE_LOG_LINE(ERR, HASH, "RCU reclaim all resources failed"); } memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket)); @@ -1511,8 +1511,8 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n) /* Return key indexes to free slot ring */ ret = free_slot(h, rcu_dq_entry.key_idx); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1540,7 +1540,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0); if (hash_rcu_cfg == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + RTE_LOG_LINE(ERR, HASH, "memory allocation failed"); return 1; } @@ -1564,7 +1564,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) h->dq = rte_rcu_qsbr_dq_create(&params); if (h->dq == NULL) { rte_free(hash_rcu_cfg); - RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n"); + RTE_LOG_LINE(ERR, HASH, "HASH defer queue creation failed"); return 1; } } else { @@ -1593,8 +1593,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, int ret = free_slot(h, bkt->key_idx[i]); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + RTE_LOG_LINE(ERR, HASH, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1783,7 +1783,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, } else if (h->dq) /* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */ if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0) - RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to push QSBR FIFO"); } __hash_rw_writer_unlock(h); return ret; diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c index faeb50cd89..20433a92c8 100644 --- a/lib/hash/rte_fbk_hash.c +++ b/lib/hash/rte_fbk_hash.c @@ -118,7 +118,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params 
*params) te = rte_zmalloc("FBK_HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate tailq entry"); goto exit; } @@ -126,7 +126,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params) ht = rte_zmalloc_socket(hash_name, mem_size, 0, params->socket_id); if (ht == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n"); + RTE_LOG_LINE(ERR, HASH, "Failed to allocate fbk hash table"); rte_free(te); goto exit; } diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c index 1439d8a71f..0d52840eaa 100644 --- a/lib/hash/rte_hash_crc.c +++ b/lib/hash/rte_hash_crc.c @@ -34,8 +34,8 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_X86 if (!(alg & CRC32_SSE42_x64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42"); if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42) rte_hash_crc32_alg = CRC32_SSE42; else @@ -44,15 +44,15 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_ARM64 if (!(alg & CRC32_ARM64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_ARM64\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_ARM64"); if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32)) rte_hash_crc32_alg = CRC32_ARM64; #endif if (rte_hash_crc32_alg == CRC32_SW) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_SW\n"); + RTE_LOG_LINE(WARNING, HASH_CRC, + "Unsupported CRC32 algorithm requested using CRC32_SW"); } /* Setting the best available algorithm */ diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index d819dddd84..a5d84eee8e 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -243,8 +243,8 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, /* allocate tailq entry */ te = rte_zmalloc("THASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, - "Can not allocate tailq entry for thash context %s\n", + RTE_LOG_LINE(ERR, HASH, + "Can not allocate tailq entry for thash context %s", name); rte_errno = ENOMEM; goto exit; @@ -252,7 +252,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx = rte_zmalloc(NULL, sizeof(struct rte_thash_ctx) + key_len, 0); if (ctx == NULL) { - RTE_LOG(ERR, HASH, "thash ctx %s memory allocation failed\n", + RTE_LOG_LINE(ERR, HASH, "thash ctx %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; @@ -275,7 +275,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t), RTE_CACHE_LINE_SIZE); if (ctx->matrices == NULL) { - RTE_LOG(ERR, HASH, "Cannot allocate matrices\n"); + RTE_LOG_LINE(ERR, HASH, "Cannot allocate matrices"); rte_errno = ENOMEM; goto free_ctx; } @@ -390,8 +390,8 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr, if (((lfsr->bits_cnt + req_bits) > (1ULL << lfsr->deg) - 1) && ((ctx->flags & RTE_THASH_IGNORE_PERIOD_OVERFLOW) != RTE_THASH_IGNORE_PERIOD_OVERFLOW)) { - RTE_LOG(ERR, HASH, - "Can't generate m-sequence due to period overflow\n"); + RTE_LOG_LINE(ERR, HASH, + "Can't generate m-sequence due to period overflow"); return -ENOSPC; } @@ -470,9 +470,9 @@ insert_before(struct rte_thash_ctx *ctx, return ret; } } else if ((next_ent != NULL) && (end > 
next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -ENOSPC; } @@ -519,9 +519,9 @@ insert_after(struct rte_thash_ctx *ctx, int ret; if ((next_ent != NULL) && (end > next_ent->offset)) { - RTE_LOG(ERR, HASH, + RTE_LOG_LINE(ERR, HASH, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -EEXIST; } diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c index c863789b51..6b84180b62 100644 --- a/lib/hash/rte_thash_gfni.c +++ b/lib/hash/rte_thash_gfni.c @@ -20,8 +20,8 @@ rte_thash_gfni(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } return 0; @@ -38,8 +38,8 @@ rte_thash_gfni_bulk(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + RTE_LOG_LINE(ERR, HASH, + "%s is undefined under given arch", __func__); } for (i = 0; i < num; i++) diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c index eed399da6b..02dcac3137 100644 --- a/lib/ip_frag/rte_ip_frag_common.c +++ b/lib/ip_frag/rte_ip_frag_common.c @@ -54,20 +54,20 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries, if (rte_is_power_of_2(bucket_entries) == 0 || nb_entries > UINT32_MAX || nb_entries == 0 || nb_entries < max_entries) { - RTE_LOG(ERR, IPFRAG, "%s: invalid input parameter\n", __func__); + RTE_LOG_LINE(ERR, IPFRAG, "%s: invalid input parameter", __func__); return NULL; } sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]); if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE, socket_id)) == NULL) { - RTE_LOG(ERR, IPFRAG, - "%s: allocation of %zu bytes at socket %d failed do\n", + RTE_LOG_LINE(ERR, IPFRAG, + "%s: allocation of %zu bytes at socket %d failed do", __func__, sz, socket_id); return NULL; } - RTE_LOG(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d\n", + RTE_LOG_LINE(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d", __func__, sz, socket_id); tbl->max_cycles = max_cycles; diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c index f3c1746cca..cc3c2cf4de 100644 --- a/lib/latencystats/rte_latencystats.c +++ b/lib/latencystats/rte_latencystats.c @@ -25,7 +25,6 @@ latencystat_cycles_per_ns(void) return rte_get_timer_hz() / NS_PER_SEC; } -/* Macros for printing using RTE_LOG */ RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO); #define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype @@ -96,7 +95,7 @@ rte_latencystats_update(void) latency_stats_index, values, NUM_LATENCY_STATS); if (ret < 0) - RTE_LOG(INFO, LATENCY_STATS, "Failed to push the stats\n"); + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to push the stats"); return ret; } @@ -228,7 +227,7 @@ rte_latencystats_init(uint64_t app_samp_intvl, mz = rte_memzone_reserve(MZ_RTE_LATENCY_STATS, sizeof(*glob_stats), rte_socket_id(), flags); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d\n", + RTE_LOG_LINE(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d", __func__, __LINE__); return -ENOMEM; } @@ -244,8 +243,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, latency_stats_index = 
rte_metrics_reg_names(ptr_strings, NUM_LATENCY_STATS); if (latency_stats_index < 0) { - RTE_LOG(DEBUG, LATENCY_STATS, - "Failed to register latency stats names\n"); + RTE_LOG_LINE(DEBUG, LATENCY_STATS, + "Failed to register latency stats names"); return -1; } @@ -253,8 +252,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_mbuf_dyn_rx_timestamp_register(&timestamp_dynfield_offset, &timestamp_dynflag); if (ret != 0) { - RTE_LOG(ERR, LATENCY_STATS, - "Cannot register mbuf field/flag for timestamp\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Cannot register mbuf field/flag for timestamp"); return -rte_errno; } @@ -264,8 +263,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -276,18 +275,18 @@ rte_latencystats_init(uint64_t app_samp_intvl, cbs->cb = rte_eth_add_first_rx_callback(pid, qid, add_time_stamps, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; cbs->cb = rte_eth_add_tx_callback(pid, qid, calc_latency, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to " "register Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } return 0; @@ -308,8 +307,8 @@ rte_latencystats_uninit(void) ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + RTE_LOG_LINE(INFO, LATENCY_STATS, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -319,17 +318,17 @@ rte_latencystats_uninit(void) cbs = &rx_cbs[pid][qid]; ret = rte_eth_remove_rx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; ret = rte_eth_remove_tx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to " "remove Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } @@ -366,8 +365,8 @@ rte_latencystats_get(struct rte_metric_value *values, uint16_t size) const struct rte_memzone *mz; mz = rte_memzone_lookup(MZ_RTE_LATENCY_STATS); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, - "Latency stats memzone not found\n"); + RTE_LOG_LINE(ERR, LATENCY_STATS, + "Latency stats memzone not found"); return -ENOMEM; } glob_stats = mz->addr; diff --git a/lib/log/log.c b/lib/log/log.c index ab06132a98..fa22d128a7 100644 --- a/lib/log/log.c +++ b/lib/log/log.c @@ -146,7 +146,7 @@ logtype_set_level(uint32_t type, uint32_t level) if (current != level) { rte_logs.dynamic_types[type].loglevel = level; - RTE_LOG(DEBUG, EAL, "%s log level changed from %s to %s\n", + RTE_LOG_LINE(DEBUG, EAL, "%s log level changed from %s to %s", rte_logs.dynamic_types[type].name == NULL ? 
"" : rte_logs.dynamic_types[type].name, eal_log_level2str(current), @@ -518,8 +518,8 @@ eal_log_set_default(FILE *default_log) default_log_stream = default_log; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG - RTE_LOG(NOTICE, EAL, - "Debug dataplane logs available - lower performance\n"); + RTE_LOG_LINE(NOTICE, EAL, + "Debug dataplane logs available - lower performance"); #endif } diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c index 0ca8214786..a332faf720 100644 --- a/lib/lpm/rte_lpm.c +++ b/lib/lpm/rte_lpm.c @@ -192,7 +192,7 @@ rte_lpm_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry"); rte_errno = ENOMEM; goto exit; } @@ -201,7 +201,7 @@ rte_lpm_create(const char *name, int socket_id, i_lpm = rte_zmalloc_socket(mem_name, mem_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto exit; @@ -211,7 +211,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM rules_tbl memory allocation failed"); rte_free(i_lpm); i_lpm = NULL; rte_free(te); @@ -223,7 +223,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->lpm.tbl8 == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 memory allocation failed"); rte_free(i_lpm->rules_tbl); rte_free(i_lpm); i_lpm = NULL; @@ -338,7 +338,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg) params.v = cfg->v; i_lpm->dq = rte_rcu_qsbr_dq_create(&params); if (i_lpm->dq == NULL) { - RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM defer queue creation failed"); return 1; } } else { @@ -565,7 +565,7 @@ tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start) status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq, (void *)&tbl8_group_start); if (status == 1) { - RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to push QSBR FIFO"); return -rte_errno; } } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 24ce7dd022..251bfcc73d 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", + RTE_LOG_LINE(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", + 
RTE_LOG_LINE(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -330,7 +330,7 @@ rte_lpm6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM6_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry!"); rte_errno = ENOMEM; goto fail; } @@ -340,7 +340,7 @@ rte_lpm6_create(const char *name, int socket_id, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto fail; diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c index 3eccc61827..8472c6a977 100644 --- a/lib/mbuf/rte_mbuf.c +++ b/lib/mbuf/rte_mbuf.c @@ -231,7 +231,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -251,7 +251,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; @@ -297,7 +297,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -307,12 +307,12 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, const struct rte_pktmbuf_extmem *extm = ext_mem + i; if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) { - RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n"); + RTE_LOG_LINE(ERR, MBUF, "invalid extmem descriptor"); rte_errno = EINVAL; return NULL; } if (data_room_size > extm->elt_size) { - RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n", + RTE_LOG_LINE(ERR, MBUF, "ext elt_size=%u is too small", priv_size); rte_errno = EINVAL; return NULL; @@ -321,7 +321,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, } /* Check whether enough external memory provided. 
*/ if (n_elts < n) { - RTE_LOG(ERR, MBUF, "not enough extmem\n"); + RTE_LOG_LINE(ERR, MBUF, "not enough extmem"); rte_errno = ENOMEM; return NULL; } @@ -342,7 +342,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c index 4fb1863a10..a9f7bb2b81 100644 --- a/lib/mbuf/rte_mbuf_dyn.c +++ b/lib/mbuf/rte_mbuf_dyn.c @@ -118,7 +118,7 @@ init_shared_mem(void) mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME); } if (mz == NULL) { - RTE_LOG(ERR, MBUF, "Failed to get mbuf dyn shared memory\n"); + RTE_LOG_LINE(ERR, MBUF, "Failed to get mbuf dyn shared memory"); return -1; } @@ -317,7 +317,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, shm->free_space[i] = 0; process_score(); - RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd", params->name, params->size, params->align, params->flags, offset); @@ -491,7 +491,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params, shm->free_flags &= ~(1ULL << bitnum); - RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n", + RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u", params->name, params->flags, bitnum); return bitnum; @@ -592,8 +592,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, offset = rte_mbuf_dynfield_register(&field_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf field for timestamp\n"); + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf field for timestamp"); return -1; } if (field_offset != NULL) @@ -602,8 +602,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, strlcpy(flag_desc.name, flag_name, sizeof(flag_desc.name)); offset = rte_mbuf_dynflag_register(&flag_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf flag for %s timestamp\n", + RTE_LOG_LINE(ERR, MBUF, + "Failed to register mbuf flag for %s timestamp", direction); return -1; } diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c index 5318430126..639aa557f8 100644 --- a/lib/mbuf/rte_mbuf_pool_ops.c +++ b/lib/mbuf/rte_mbuf_pool_ops.c @@ -33,8 +33,8 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name) return 0; } - RTE_LOG(ERR, MBUF, - "%s is already registered as platform mbuf pool ops\n", + RTE_LOG_LINE(ERR, MBUF, + "%s is already registered as platform mbuf pool ops", (char *)mz->addr); return -EEXIST; } diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 2f8adad5ca..b66c8898a8 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -775,7 +775,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id) cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE, socket_id); if (cache == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate mempool cache."); rte_errno = ENOMEM; return NULL; } @@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, /* try to allocate tailq entry */ te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0); if (te == 
NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate tailq entry!"); goto exit_unlock; } @@ -1088,16 +1088,16 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, if (free == 0) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (put)\n"); } hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2; } else if (free == 1) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (get)\n"); } @@ -1105,8 +1105,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, } else if (free == 2) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 && cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (audit)\n"); } @@ -1114,8 +1114,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, tlr = rte_mempool_get_trailer(obj); cookie = tlr->cookie; if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_LOG_LINE(CRIT, MEMPOOL, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad trailer cookie\n"); } @@ -1200,7 +1200,7 @@ mempool_audit_cache(const struct rte_mempool *mp) const struct rte_mempool_cache *cache; cache = &mp->local_cache[lcore_id]; if (cache->len > RTE_DIM(cache->objs)) { - RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "badness on cache[%u]", lcore_id); rte_panic("MEMPOOL: invalid cache len\n"); } @@ -1429,7 +1429,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func, cb = calloc(1, sizeof(*cb)); if (cb == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate event callback!\n"); + RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate event callback!"); ret = -ENOMEM; goto exit; } diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 4f8511b8f5..30ce579737 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -847,7 +847,7 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table, ret = ops->enqueue(mp, obj_table, n); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (unlikely(ret < 0)) - RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n", + RTE_LOG_LINE(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s", n, mp->name); #endif return ret; diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index e871de9ec9..d35e9b118b 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -31,22 +31,22 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (rte_mempool_ops_table.num_ops >= RTE_MEMPOOL_MAX_OPS_IDX) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(ERR, MEMPOOL, - "Maximum number of mempool ops structs exceeded\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Maximum number of mempool ops structs exceeded"); return -ENOSPC; } if (h->alloc == NULL || h->enqueue == NULL || h->dequeue == NULL || h->get_count == NULL) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - 
RTE_LOG(ERR, MEMPOOL, - "Missing callback while registering mempool ops\n"); + RTE_LOG_LINE(ERR, MEMPOOL, + "Missing callback while registering mempool ops"); return -EINVAL; } if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG_LINE(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long", __func__, h->name); rte_errno = EEXIST; return -EEXIST; diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c index 436cf54953..fe91c48947 100644 --- a/lib/pipeline/rte_pipeline.c +++ b/lib/pipeline/rte_pipeline.c @@ -160,22 +160,22 @@ static int rte_pipeline_check_params(struct rte_pipeline_params *params) { if (params == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* name */ if (params->name == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter name\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter name", __func__); return -EINVAL; } /* socket */ if (params->socket_id < 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter socket_id\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for parameter socket_id", __func__); return -EINVAL; } @@ -192,8 +192,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) /* Check input parameters */ status = rte_pipeline_check_params(params); if (status != 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline params check failed (%d)\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline params check failed (%d)", __func__, status); return NULL; } @@ -203,8 +203,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline memory allocation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Pipeline memory allocation failed", __func__); return NULL; } @@ -232,8 +232,8 @@ rte_pipeline_free(struct rte_pipeline *p) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: rte_pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: rte_pipeline parameter is NULL", __func__); return -EINVAL; } @@ -273,44 +273,44 @@ rte_table_check_params(struct rte_pipeline *p, uint32_t *table_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter is NULL", __func__); return -EINVAL; } if (table_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: table_id parameter is NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops is NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer is NULL", __func__); return -EINVAL; } if (params->ops->f_lookup == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_lookup function pointer is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_lookup function pointer is NULL", 
__func__); return -EINVAL; } /* De we have room for one more table? */ if (p->num_tables == RTE_PIPELINE_TABLE_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for num_tables parameter\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Incorrect value for num_tables parameter", __func__); return -EINVAL; } @@ -343,8 +343,8 @@ rte_pipeline_table_create(struct rte_pipeline *p, default_entry = rte_zmalloc_socket( "PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id); if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Failed to allocate default entry\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Failed to allocate default entry", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_pipeline_table_create(struct rte_pipeline *p, entry_size); if (h_table == NULL) { rte_free(default_entry); - RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Table creation failed", __func__); return -EINVAL; } @@ -399,20 +399,20 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: default_entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: default_entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -421,8 +421,8 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (default_entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -448,14 +448,14 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: pipeline parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -484,32 +484,32 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer 
NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add function pointer NULL", __func__); return -EINVAL; } @@ -517,8 +517,8 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, if ((entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -544,28 +544,28 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_delete == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -585,32 +585,32 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entries parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL", __func__); return -EINVAL; } @@ -619,8 +619,8 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, if ((entries[i]->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entries[i]->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } } @@ -649,28 +649,28 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = 
&p->tables[table_id]; if (table->ops.f_delete_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -687,35 +687,35 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_rx == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: f_rx function pointer NULL", __func__); return -EINVAL; } @@ -723,15 +723,15 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, /* burst_size */ if ((params->burst_size == 0) || (params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: invalid value for burst_size", __func__); return -EINVAL; } /* Do we have room for one more port? */ if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_in\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_in", __func__); return -EINVAL; } @@ -744,51 +744,51 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx_bulk function pointer NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: f_tx_bulk function pointer NULL", __func__); return -EINVAL; } /* Do we have room for one more port? 
*/ if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_out\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: invalid value for num_ports_out", __func__); return -EINVAL; } @@ -816,7 +816,7 @@ rte_pipeline_port_in_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -866,7 +866,7 @@ rte_pipeline_port_out_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__); return -EINVAL; } @@ -901,21 +901,21 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: Table ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Table ID %u is out of range", __func__, table_id); return -EINVAL; } @@ -935,14 +935,14 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -982,13 +982,13 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1035,7 +1035,7 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1043,17 +1043,17 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check that pipeline has at least one input port, one table and one output port */ if (p->num_ports_in == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 input port", __func__); return -EINVAL; } if (p->num_tables == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 table", __func__); return -EINVAL; } if (p->num_ports_out == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 output port", __func__); return -EINVAL; } @@ 
-1063,8 +1063,8 @@ rte_pipeline_check(struct rte_pipeline *p) struct rte_port_in *port_in = &p->ports_in[port_in_id]; if (port_in->table_id == RTE_TABLE_INVALID) { - RTE_LOG(ERR, PIPELINE, - "%s: Port IN ID %u is not connected\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: Port IN ID %u is not connected", __func__, port_in_id); return -EINVAL; } @@ -1447,7 +1447,7 @@ rte_pipeline_flush(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1500,14 +1500,14 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1537,13 +1537,13 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", __func__); + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_out) { - RTE_LOG(ERR, PIPELINE, - "%s: port OUT ID %u is out of range\n", __func__, port_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: port OUT ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1571,14 +1571,14 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table %u is out of range\n", __func__, table_id); + RTE_LOG_LINE(ERR, PIPELINE, + "%s: table %u is out of range", __func__, table_id); return -EINVAL; } diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c index e6bb7ee480..7f7eadda11 100644 --- a/lib/port/rte_port_ethdev.c +++ b/lib/port/rte_port_ethdev.c @@ -43,7 +43,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -51,7 +51,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -78,7 +78,7 @@ static int rte_port_ethdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -142,7 +142,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -150,7 +150,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", 
sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -257,7 +257,7 @@ static int rte_port_ethdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -323,7 +323,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -331,7 +331,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -470,7 +470,7 @@ static int rte_port_ethdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c index 13350fd608..1d0571966c 100644 --- a/lib/port/rte_port_eventdev.c +++ b/lib/port/rte_port_eventdev.c @@ -45,7 +45,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -53,7 +53,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -85,7 +85,7 @@ static int rte_port_eventdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -155,7 +155,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -163,7 +163,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -290,7 +290,7 @@ static int rte_port_eventdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -362,7 +362,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + 
RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -530,7 +530,7 @@ static int rte_port_eventdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c index 7e140793b2..1b95d7b014 100644 --- a/lib/port/rte_port_fd.c +++ b/lib/port/rte_port_fd.c @@ -43,19 +43,19 @@ rte_port_fd_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } if (conf->fd < 0) { - RTE_LOG(ERR, PORT, "%s: Invalid file descriptor\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid file descriptor", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Invalid MTU\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid MTU", __func__); return NULL; } if (conf->mempool == NULL) { - RTE_LOG(ERR, PORT, "%s: Invalid mempool\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid mempool", __func__); return NULL; } @@ -63,7 +63,7 @@ rte_port_fd_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -109,7 +109,7 @@ static int rte_port_fd_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -171,7 +171,7 @@ rte_port_fd_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -179,7 +179,7 @@ rte_port_fd_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -279,7 +279,7 @@ static int rte_port_fd_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -344,7 +344,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -352,7 +352,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate 
port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -464,7 +464,7 @@ static int rte_port_fd_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c index e1f1892176..39ff31e447 100644 --- a/lib/port/rte_port_frag.c +++ b/lib/port/rte_port_frag.c @@ -62,24 +62,24 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter mtu is invalid", __func__); return NULL; } if (conf->pool_direct == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_direct is NULL", __func__); return NULL; } if (conf->pool_indirect == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_indirect is NULL", __func__); return NULL; } @@ -88,7 +88,7 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return NULL; } @@ -232,7 +232,7 @@ static int rte_port_ring_reader_frag_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c index 15109661d1..1e697fd226 100644 --- a/lib/port/rte_port_ras.c +++ b/lib/port/rte_port_ras.c @@ -69,16 +69,16 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__); return NULL; } if ((conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n", + RTE_LOG_LINE(ERR, PORT, "%s: Parameter tx_burst_sz is invalid", __func__); return NULL; } @@ -87,7 +87,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate socket", __func__); return NULL; } @@ -103,7 +103,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) socket_id); if (port->frag_tbl == NULL) { - RTE_LOG(ERR, PORT, "%s: rte_ip_frag_table_create failed\n", + RTE_LOG_LINE(ERR, PORT, "%s: rte_ip_frag_table_create failed", __func__); rte_free(port); 
return NULL; @@ -282,7 +282,7 @@ rte_port_ring_writer_ras_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c index 002efb7c3e..42b33763d1 100644 --- a/lib/port/rte_port_ring.c +++ b/lib/port/rte_port_ring.c @@ -46,7 +46,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, (conf->ring == NULL) || (rte_ring_is_cons_single(conf->ring) && is_multi) || (!rte_ring_is_cons_single(conf->ring) && !is_multi)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -54,7 +54,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -107,7 +107,7 @@ static int rte_port_ring_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -174,7 +174,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -182,7 +182,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -370,7 +370,7 @@ rte_port_ring_writer_free(void *port) struct rte_port_ring_writer *p = port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -443,7 +443,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__); return NULL; } @@ -451,7 +451,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -703,7 +703,7 @@ rte_port_ring_writer_nodrop_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c index f6255c4346..e83112989f 100644 --- a/lib/port/rte_port_sched.c +++ b/lib/port/rte_port_sched.c @@ -40,7 +40,7 @@ rte_port_sched_reader_create(void *params, int socket_id) /* Check input parameters */ if ((conf == NULL) || (conf->sched == NULL)) { 
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -48,7 +48,7 @@ rte_port_sched_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -74,7 +74,7 @@ static int rte_port_sched_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -139,7 +139,7 @@ rte_port_sched_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -147,7 +147,7 @@ rte_port_sched_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -247,7 +247,7 @@ static int rte_port_sched_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c index ff9677cdfe..cb4b7fa7fb 100644 --- a/lib/port/rte_port_source_sink.c +++ b/lib/port/rte_port_source_sink.c @@ -75,8 +75,8 @@ pcap_source_load(struct rte_port_source *port, /* first time open, get packet number */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -88,29 +88,29 @@ pcap_source_load(struct rte_port_source *port, port->pkt_len = rte_zmalloc_socket("PCAP", (sizeof(*port->pkt_len) * n_pkts), 0, socket_id); if (port->pkt_len == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } pkt_len_aligns = rte_malloc("PCAP", (sizeof(*pkt_len_aligns) * n_pkts), 0); if (pkt_len_aligns == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } port->pkts = rte_zmalloc_socket("PCAP", (sizeof(*port->pkts) * n_pkts), 0, socket_id); if (port->pkts == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } /* open 2nd time, get pkt_len */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -128,7 +128,7 @@ pcap_source_load(struct rte_port_source *port, buff = rte_zmalloc_socket("PCAP", total_buff_len, 0, socket_id); if (buff == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + RTE_LOG_LINE(ERR, PORT, "No enough memory"); goto error_exit; } @@ -137,8 +137,8 @@ pcap_source_load(struct rte_port_source *port, /* open file one last time to copy the pkt content 
*/ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -155,8 +155,8 @@ pcap_source_load(struct rte_port_source *port, rte_free(pkt_len_aligns); - RTE_LOG(INFO, PORT, "Successfully load pcap file " - "'%s' with %u pkts\n", + RTE_LOG_LINE(INFO, PORT, "Successfully load pcap file " + "'%s' with %u pkts", file_name, port->n_pkts); return 0; @@ -180,8 +180,8 @@ pcap_source_load(struct rte_port_source *port, int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Source port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Source port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -199,7 +199,7 @@ rte_port_source_create(void *params, int socket_id) /* Check input arguments*/ if ((p == NULL) || (p->mempool == NULL)) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__); return NULL; } @@ -207,7 +207,7 @@ rte_port_source_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -332,15 +332,15 @@ pcap_sink_open(struct rte_port_sink *port, /** Open a dead pcap handler for opening dumper file */ tx_pcap = pcap_open_dead(DLT_EN10MB, 65535); if (tx_pcap == NULL) { - RTE_LOG(ERR, PORT, "Cannot open pcap dead handler\n"); + RTE_LOG_LINE(ERR, PORT, "Cannot open pcap dead handler"); return -1; } /* The dumper is created using the previous pcap_t reference */ pcap_dumper = pcap_dump_open(tx_pcap, file_name); if (pcap_dumper == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "\"%s\" for writing\n", file_name); + RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file " + "\"%s\" for writing", file_name); return -1; } @@ -349,7 +349,7 @@ pcap_sink_open(struct rte_port_sink *port, port->pkt_index = 0; port->dump_finish = 0; - RTE_LOG(INFO, PORT, "Ready to dump packets to file \"%s\"\n", + RTE_LOG_LINE(INFO, PORT, "Ready to dump packets to file \"%s\"", file_name); return 0; @@ -402,7 +402,7 @@ pcap_sink_write_pkt(struct rte_port_sink *port, struct rte_mbuf *mbuf) if ((port->max_pkts != 0) && (port->pkt_index >= port->max_pkts)) { port->dump_finish = 1; - RTE_LOG(INFO, PORT, "Dumped %u packets to file\n", + RTE_LOG_LINE(INFO, PORT, "Dumped %u packets to file", port->pkt_index); } @@ -433,8 +433,8 @@ do { \ int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Sink port field " \ - "\"file_name\" is not NULL.\n"); \ + RTE_LOG_LINE(ERR, PORT, "Sink port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -459,7 +459,7 @@ rte_port_sink_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c index 27b7e07cea..8e9abff9d6 100644 --- a/lib/port/rte_port_sym_crypto.c +++ b/lib/port/rte_port_sym_crypto.c @@ -44,7 +44,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - 
RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__); return NULL; } @@ -52,7 +52,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -100,7 +100,7 @@ static int rte_port_sym_crypto_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__); return -EINVAL; } @@ -167,7 +167,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -175,7 +175,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -285,7 +285,7 @@ static int rte_port_sym_crypto_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } @@ -353,7 +353,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__); return NULL; } @@ -361,7 +361,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__); return NULL; } @@ -497,7 +497,7 @@ static int rte_port_sym_crypto_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index a6f2097d5b..a9bbda8f48 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -59,38 +59,38 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) int fd = -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } /* check if path is already open */ if (global_fds[lcore_id] != -1) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d", lcore_id, global_fds[lcore_id]); return -1; } snprintf(fd_path, PATH_MAX, "%s.%u", path, lcore_id); - RTE_LOG(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u\n", + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u", fd_path, lcore_id); fd = open(fd_path, O_RDWR); if (fd < 0) { - 
RTE_LOG(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " - "%s\n", fd_path, strerror(errno)); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " + "%s", fd_path, strerror(errno)); return -1; } flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s", fd_path); goto error; } flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s\n", fd_path); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " + "file %s", fd_path); goto error; } /* QEMU needs a delay after connection */ @@ -103,13 +103,13 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) global_fds[lcore_id] = fd; ret = guest_channel_send_msg(&pkt, lcore_id); if (ret != 0) { - RTE_LOG(ERR, GUEST_CHANNEL, - "Error on channel '%s' communications test: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, + "Error on channel '%s' communications test: %s", fd_path, ret > 0 ? strerror(ret) : "channel not connected"); goto error; } - RTE_LOG(INFO, GUEST_CHANNEL, "Channel '%s' is now connected\n", fd_path); + RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Channel '%s' is now connected", fd_path); return 0; error: close(fd); @@ -125,13 +125,13 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt, void *buffer = pkt; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } while (buffer_len > 0) { @@ -166,13 +166,13 @@ int power_guest_channel_read_msg(void *pkt, return -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected"); return -1; } @@ -181,10 +181,10 @@ int power_guest_channel_read_msg(void *pkt, ret = poll(&fds, 1, TIMEOUT); if (ret == 0) { - RTE_LOG(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.\n"); + RTE_LOG_LINE(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function."); return -1; } else if (ret < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s", strerror(errno)); return -1; } @@ -200,7 +200,7 @@ int power_guest_channel_read_msg(void *pkt, } if (ret == 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.\n"); + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed."); return -1; } pkt = (char *)pkt + ret; @@ -221,7 +221,7 @@ void guest_channel_host_disconnect(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index 8b55f19247..dd143f2cc8 100644 --- 
a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -63,8 +63,8 @@ static int set_freq_internal(struct acpi_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -75,13 +75,13 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -127,14 +127,14 @@ power_get_available_freqs(struct acpi_power_info *pi) open_core_sysfs_file(&f, "r", POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } @@ -143,12 +143,12 @@ power_get_available_freqs(struct acpi_power_info *pi) count = rte_strsplit(buf, sizeof(buf), freqs, RTE_MAX_LCORE_FREQS, ' '); if (count <= 0) { - RTE_LOG(ERR, POWER, "No available frequency in " - ""POWER_SYSFILE_AVAIL_FREQ"\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "No available frequency in " + POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); goto out; } if (count >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies : %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies : %d", count); goto out; } @@ -196,14 +196,14 @@ power_init_for_setting_freq(struct acpi_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "Failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -237,7 +237,7 @@ power_acpi_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -253,42 +253,42 @@ power_acpi_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of 
lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -310,7 +310,7 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -325,8 +325,8 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -336,14 +336,14 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -364,18 +364,18 @@ power_acpi_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -387,7 +387,7 @@ uint32_t power_acpi_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -398,7 +398,7 @@ int power_acpi_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -411,7 +411,7 @@ power_acpi_cpufreq_freq_down(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -429,7 +429,7 @@ power_acpi_cpufreq_freq_up(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -446,7 +446,7 @@ int power_acpi_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -470,7 +470,7 @@ power_acpi_cpufreq_freq_min(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -487,7 +487,7 @@ power_acpi_turbo_status(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -503,7 +503,7 @@ power_acpi_enable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -513,16 +513,16 @@ power_acpi_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } /* Max may have changed, so call to max function */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -536,7 +536,7 @@ power_acpi_disable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -547,8 +547,8 @@ power_acpi_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -563,11 +563,11 @@ int power_acpi_get_capabilities(unsigned int lcore_id, struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index dbd9d2b3ee..44581fd48b 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -70,8 +70,8 @@ static int 
set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -82,13 +82,13 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -119,7 +119,7 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -127,21 +127,21 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -190,7 +190,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -198,7 +198,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -206,28 +206,28 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_FREQ, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_nom, &nominal_freq); if (ret < 0) { - RTE_LOG(ERR, 
POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } @@ -235,8 +235,8 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) power_check_turbo(pi); if (scaling_max_freq < scaling_min_freq) { - RTE_LOG(ERR, POWER, "scaling min freq exceeds max freq, " - "not expected! Check system power policy\n"); + RTE_LOG_LINE(ERR, POWER, "scaling min freq exceeds max freq, " + "not expected! Check system power policy"); goto out; } else if (scaling_max_freq == scaling_min_freq) { num_freqs = 1; @@ -304,14 +304,14 @@ power_init_for_setting_freq(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -355,7 +355,7 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -371,42 +371,42 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -434,7 +434,7 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -449,8 +449,8 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) if 
(!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -460,14 +460,14 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -484,18 +484,18 @@ power_amd_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -507,7 +507,7 @@ uint32_t power_amd_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -518,7 +518,7 @@ int power_amd_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -531,7 +531,7 @@ power_amd_pstate_cpufreq_freq_down(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -549,7 +549,7 @@ power_amd_pstate_cpufreq_freq_up(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -566,7 +566,7 @@ int power_amd_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -591,7 +591,7 @@ power_amd_pstate_cpufreq_freq_min(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_amd_pstate_turbo_status(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -622,7 +622,7 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, 
"Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -632,8 +632,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -643,8 +643,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -658,7 +658,7 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -669,8 +669,8 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= pi->nom_idx)) { /* Try to set freq to max by default coming out of turbo */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -686,11 +686,11 @@ power_amd_pstate_get_capabilities(unsigned int lcore_id, struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bf77eafa88..bc57642cd1 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -163,14 +163,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, open_core_sysfs_file(&f_governor, "rw+", POWER_SYSFILE_GOVERNOR, lcore_id); if (f_governor == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = read_core_sysfs_s(f_governor, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_GOVERNOR); goto out; } @@ -190,14 +190,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Write the new governor */ ret = write_core_sysfs_s(f_governor, new_governor); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to write %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to write %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = 0; - RTE_LOG(INFO, POWER, "Power management governor of lcore %u has been " - "set to '%s' successfully\n", lcore_id, new_governor); + RTE_LOG_LINE(INFO, POWER, "Power management governor of lcore %u has been " + "set to '%s' successfully", lcore_id, new_governor); out: if (f_governor != NULL) fclose(f_governor); diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index bb70f6ae52..83e1e62830 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -73,8 +73,8 @@ static int set_freq_internal(struct cppc_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, 
POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -85,13 +85,13 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -122,7 +122,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -130,7 +130,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -138,28 +138,28 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_cmax, "r", POWER_SYSFILE_SYS_MAX, pi->lcore_id); if (f_cmax == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SYS_MAX); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_cmax, &cpuinfo_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SYS_MAX); goto err; } @@ -209,7 +209,7 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -217,21 +217,21 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -249,7 +249,7 @@ power_get_available_freqs(struct cppc_power_info *pi) num_freqs = (nominal_perf - scaling_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= 
RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -290,14 +290,14 @@ power_init_for_setting_freq(struct cppc_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -341,7 +341,7 @@ power_cppc_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -357,42 +357,42 @@ power_cppc_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -420,7 +420,7 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -435,8 +435,8 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -446,14 +446,14 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) 
< 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -470,18 +470,18 @@ power_cppc_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -493,7 +493,7 @@ uint32_t power_cppc_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -504,7 +504,7 @@ int power_cppc_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -517,7 +517,7 @@ power_cppc_cpufreq_freq_down(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -535,7 +535,7 @@ power_cppc_cpufreq_freq_up(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -552,7 +552,7 @@ int power_cppc_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -576,7 +576,7 @@ power_cppc_cpufreq_freq_min(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -592,7 +592,7 @@ power_cppc_turbo_status(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -617,8 +617,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -628,8 +628,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set 
frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -643,7 +643,7 @@ power_cppc_disable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -654,8 +654,8 @@ power_cppc_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -671,11 +671,11 @@ power_cppc_get_capabilities(unsigned int lcore_id, struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 688aebc4ee..0ee8e603d2 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -52,8 +52,8 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) int ret; if (idx >= MAX_UNCORE_FREQS || idx >= ui->nb_freqs) { - RTE_LOG(DEBUG, POWER, "Invalid uncore frequency index %u, which " - "should be less than %u\n", idx, ui->nb_freqs); + RTE_LOG_LINE(DEBUG, POWER, "Invalid uncore frequency index %u, which " + "should be less than %u", idx, ui->nb_freqs); return -1; } @@ -65,13 +65,13 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) open_core_sysfs_file(&ui->f_cur_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (ui->f_cur_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); return -1; } ret = read_core_sysfs_u32(ui->f_cur_max, &curr_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); fclose(ui->f_cur_max); return -1; @@ -79,14 +79,14 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) /* check this value first before fprintf value to f_cur_max, so value isn't overwritten */ if (fprintf(ui->f_cur_min, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -121,13 +121,13 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_max, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ, ui->pkg, ui->die); if (f_base_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } ret = read_core_sysfs_u32(f_base_max, &base_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, 
"Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -136,14 +136,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_min, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ, ui->pkg, ui->die); if (f_base_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } if (f_base_min != NULL) { ret = read_core_sysfs_u32(f_base_min, &base_min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -153,14 +153,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_min, "rw+", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ, ui->pkg, ui->die); if (f_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } if (f_min != NULL) { ret = read_core_sysfs_u32(f_min, &min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } @@ -170,14 +170,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (f_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + RTE_LOG_LINE(DEBUG, POWER, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } if (f_max != NULL) { ret = read_core_sysfs_u32(f_max, &max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } @@ -222,7 +222,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) num_uncore_freqs = (ui->init_max_freq - ui->init_min_freq) / BUS_FREQ + 1; if (num_uncore_freqs >= MAX_UNCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available uncore frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available uncore frequencies: %d", num_uncore_freqs); goto out; } @@ -250,7 +250,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_pkgs == 0) return -1; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Package number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Package number %02u can not exceed %u", pkg, max_pkgs); return -1; } @@ -259,7 +259,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_dies == 0) return -1; if (die >= max_dies) { - RTE_LOG(DEBUG, POWER, "Die number %02u can not exceed %u\n", + RTE_LOG_LINE(DEBUG, POWER, "Die number %02u can not exceed %u", die, max_dies); return -1; } @@ -282,15 +282,15 @@ power_intel_uncore_init(unsigned int pkg, unsigned int die) /* Init for setting uncore die frequency */ if (power_init_for_setting_uncore_freq(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot init for setting uncore frequency for " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot init for setting uncore frequency for " + "pkg %02u die %02u", pkg, die); return -1; } /* Get the available frequencies */ if (power_get_available_uncore_freqs(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot get available uncore frequencies of " - "pkg %02u die %02u\n", pkg, die); + RTE_LOG_LINE(DEBUG, POWER, "Cannot get available uncore frequencies of " + "pkg %02u die %02u", pkg, 
die); return -1; } @@ -309,14 +309,14 @@ power_intel_uncore_exit(unsigned int pkg, unsigned int die) ui = &uncore_info[pkg][die]; if (fprintf(ui->f_cur_min, "%u", ui->org_min_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", ui->org_max_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -385,13 +385,13 @@ power_intel_uncore_freqs(unsigned int pkg, unsigned int die, uint32_t *freqs, ui return -1; if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } ui = &uncore_info[pkg][die]; if (num < ui->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, ui->freqs, ui->nb_freqs * sizeof(uint32_t)); @@ -419,10 +419,10 @@ power_intel_uncore_get_num_pkgs(void) d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. " "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } @@ -451,16 +451,16 @@ power_intel_uncore_get_num_dies(unsigned int pkg) if (max_pkgs == 0) return 0; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Invalid package number\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid package number"); return 0; } d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + RTE_LOG_LINE(ERR, POWER, "Uncore frequency management not supported/enabled on this kernel. 
" "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } diff --git a/lib/power/power_kvm_vm.c b/lib/power/power_kvm_vm.c index db031f4310..218799491e 100644 --- a/lib/power/power_kvm_vm.c +++ b/lib/power/power_kvm_vm.c @@ -25,7 +25,7 @@ int power_kvm_vm_init(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -46,16 +46,16 @@ power_kvm_vm_freqs(__rte_unused unsigned int lcore_id, __rte_unused uint32_t *freqs, __rte_unused uint32_t num) { - RTE_LOG(ERR, POWER, "rte_power_freqs is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_freqs is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } uint32_t power_kvm_vm_get_freq(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_get_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -63,8 +63,8 @@ int power_kvm_vm_set_freq(__rte_unused unsigned int lcore_id, __rte_unused uint32_t index) { - RTE_LOG(ERR, POWER, "rte_power_set_freq is not implemented " - "for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_set_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -74,7 +74,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) int ret; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -82,7 +82,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) ret = guest_channel_send_msg(&pkt[lcore_id], lcore_id); if (ret == 0) return 1; - RTE_LOG(DEBUG, POWER, "Error sending message: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Error sending message: %s", ret > 0 ? 
strerror(ret) : "channel not connected"); return -1; } @@ -114,7 +114,7 @@ power_kvm_vm_freq_min(unsigned int lcore_id) int power_kvm_vm_turbo_status(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } @@ -134,6 +134,6 @@ struct rte_power_core_capabilities; int power_kvm_vm_get_capabilities(__rte_unused unsigned int lcore_id, __rte_unused struct rte_power_core_capabilities *caps) { - RTE_LOG(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management\n"); + RTE_LOG_LINE(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 5ca5f60bcd..56aa302b5d 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -82,7 +82,7 @@ power_read_turbo_pct(uint64_t *outVal) fd = open(POWER_SYSFILE_TURBO_PCT, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, POWER, "Error opening '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error opening '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); return fd; } @@ -90,7 +90,7 @@ power_read_turbo_pct(uint64_t *outVal) ret = read(fd, val, sizeof(val)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Error reading '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + RTE_LOG_LINE(ERR, POWER, "Error reading '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); goto out; } @@ -98,7 +98,7 @@ power_read_turbo_pct(uint64_t *outVal) errno = 0; *outVal = (uint64_t) strtol(val, &endptr, 10); if (errno != 0 || (*endptr != 0 && *endptr != '\n')) { - RTE_LOG(ERR, POWER, "Error converting str to digits, read from %s: %s\n", + RTE_LOG_LINE(ERR, POWER, "Error converting str to digits, read from %s: %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); ret = -1; goto out; @@ -126,7 +126,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_base_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -134,7 +134,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_base_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -142,7 +142,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "rw+", POWER_SYSFILE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MIN_FREQ); goto err; } @@ -150,7 +150,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "rw+", POWER_SYSFILE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_MAX_FREQ); goto err; } @@ -162,7 +162,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base max ratio */ ret = read_core_sysfs_u32(f_base_max, &base_max_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", 
POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -170,7 +170,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base min ratio */ ret = read_core_sysfs_u32(f_base_min, &base_min_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -179,7 +179,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) if (f_base != NULL) { ret = read_core_sysfs_u32(f_base, &base_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_FREQ); goto err; } @@ -257,8 +257,8 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) uint32_t target_freq = 0; if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -270,15 +270,15 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) * User need change the min/max as same value. */ if (fseek(pi->f_cur_min, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fseek(pi->f_cur_max, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } @@ -288,7 +288,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (pi->turbo_enable) target_freq = pi->sys_max_freq; else { - RTE_LOG(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u\n", + RTE_LOG_LINE(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u", pi->lcore_id); return -1; } @@ -299,14 +299,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx > pi->curr_idx) { if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -322,14 +322,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx < pi->curr_idx) { if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -384,7 +384,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open 
%s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } @@ -392,7 +392,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -400,14 +400,14 @@ power_get_available_freqs(struct pstate_power_info *pi) /* read base ratios */ ret = read_core_sysfs_u32(f_max, &sys_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &sys_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -450,7 +450,7 @@ power_get_available_freqs(struct pstate_power_info *pi) num_freqs = (RTE_MIN(base_max_freq, sys_max_freq) - sys_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d", num_freqs); goto out; } @@ -494,14 +494,14 @@ power_get_cur_idx(struct pstate_power_info *pi) open_core_sysfs_file(&f_cur, "r", POWER_SYSFILE_CUR_FREQ, pi->lcore_id); if (f_cur == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + RTE_LOG_LINE(ERR, POWER, "failed to open %s", POWER_SYSFILE_CUR_FREQ); goto fail; } ret = read_core_sysfs_u32(f_cur, &sys_cur_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + RTE_LOG_LINE(ERR, POWER, "Failed to read %s", POWER_SYSFILE_CUR_FREQ); goto fail; } @@ -543,7 +543,7 @@ power_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceed %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceed %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -559,47 +559,47 @@ power_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_performance(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "performance\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to " + "performance", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } if (power_get_cur_idx(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get current frequency " - "index of lcore %u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot get current frequency " + "index of lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_pstate_cpufreq_freq_max(lcore_id) < 
0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -621,7 +621,7 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -637,8 +637,8 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -650,14 +650,14 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from " "'performance' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -679,18 +679,18 @@ power_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -702,7 +702,7 @@ uint32_t power_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -714,7 +714,7 @@ int power_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -727,7 +727,7 @@ power_pstate_cpufreq_freq_up(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -746,7 +746,7 @@ power_pstate_cpufreq_freq_down(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore 
ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -762,7 +762,7 @@ int power_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -787,7 +787,7 @@ power_pstate_cpufreq_freq_min(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -804,7 +804,7 @@ power_pstate_turbo_status(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -819,7 +819,7 @@ power_pstate_enable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -829,8 +829,8 @@ power_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -845,7 +845,7 @@ power_pstate_disable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } @@ -856,8 +856,8 @@ power_pstate_disable_turbo(unsigned int lcore_id) if (pi->turbo_available && pi->curr_idx <= 1) { /* Try to set freq to max by default coming out of turbo */ if (power_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + RTE_LOG_LINE(ERR, POWER, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -873,11 +873,11 @@ int power_pstate_get_capabilities(unsigned int lcore_id, struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid argument"); return -1; } diff --git a/lib/power/rte_power.c b/lib/power/rte_power.c index 1502612b0a..7bee4f88f9 100644 --- a/lib/power/rte_power.c +++ b/lib/power/rte_power.c @@ -74,7 +74,7 @@ rte_power_set_env(enum power_management_env env) rte_spinlock_lock(&global_env_cfg_lock); if (global_default_env != PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Power Management Environment already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Power Management Environment already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -143,7 +143,7 @@ rte_power_set_env(enum power_management_env env) rte_power_freq_disable_turbo = power_amd_pstate_disable_turbo; rte_power_get_capabilities = power_amd_pstate_get_capabilities; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; } @@ -190,46 +190,46 @@ rte_power_init(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_init(lcore_id); default: - RTE_LOG(INFO, POWER, "Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Env isn't set yet!"); } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...\n"); + RTE_LOG_LINE(INFO, 
POWER, "Attempting to initialise ACPI cpufreq power management..."); ret = power_acpi_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_ACPI_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise PSTAT power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise PSTAT power management..."); ret = power_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise AMD PSTATE power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise AMD PSTATE power management..."); ret = power_amd_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_AMD_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise CPPC power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise CPPC power management..."); ret = power_cppc_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_CPPC_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise VM power management...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise VM power management..."); ret = power_kvm_vm_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_KVM_VM); goto out; } - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment for lcore " - "%u\n", lcore_id); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment for lcore " + "%u", lcore_id); out: return ret; } @@ -249,7 +249,7 @@ rte_power_exit(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_exit(lcore_id); default: - RTE_LOG(ERR, POWER, "Environment has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Environment has not been set, unable to exit gracefully"); } return -1; diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 6f18ed0adf..fb7d8fddb3 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -146,7 +146,7 @@ get_monitor_addresses(struct pmd_core_cfg *cfg, /* attempted out of bounds access */ if (i >= len) { - RTE_LOG(ERR, POWER, "Too many queues being monitored\n"); + RTE_LOG_LINE(ERR, POWER, "Too many queues being monitored"); return -1; } @@ -423,7 +423,7 @@ check_scale(unsigned int lcore) if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported"); return -ENOTSUP; } /* ensure we could initialize the power library */ @@ -434,7 +434,7 @@ check_scale(unsigned int lcore) env = rte_power_get_env(); if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ && env != PM_ENV_AMD_PSTATE_CPUFREQ) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n"); + RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized"); return -ENOTSUP; } @@ -450,7 +450,7 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* check if rte_power_monitor is supported */ if (!global_data.intrinsics_support.power_monitor) { - RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring intrinsics are not supported"); return -ENOTSUP; } /* check if multi-monitor is supported */ @@ -459,14 +459,14 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* 
if we're adding a new queue, do we support multiple queues? */ if (cfg->n_queues > 0 && !multimonitor_supported) { - RTE_LOG(DEBUG, POWER, "Monitoring multiple queues is not supported\n"); + RTE_LOG_LINE(DEBUG, POWER, "Monitoring multiple queues is not supported"); return -ENOTSUP; } /* check if the device supports the necessary PMD API */ if (rte_eth_get_monitor_addr(qdata->portid, qdata->qid, &dummy) == -ENOTSUP) { - RTE_LOG(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr\n"); + RTE_LOG_LINE(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr"); return -ENOTSUP; } @@ -566,14 +566,14 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id, clb = clb_pause; break; default: - RTE_LOG(DEBUG, POWER, "Invalid power management type\n"); + RTE_LOG_LINE(DEBUG, POWER, "Invalid power management type"); ret = -EINVAL; goto end; } /* add this queue to the list */ ret = queue_list_add(lcore_cfg, &qdata); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n", + RTE_LOG_LINE(DEBUG, POWER, "Failed to add queue to list: %s", strerror(-ret)); goto end; } @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); + RTE_LOG_LINE(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); return -EINVAL; } pause_duration = duration; @@ -704,12 +704,12 @@ int rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -721,7 +721,7 @@ int rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); + RTE_LOG_LINE(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); return -EINVAL; } @@ -742,12 +742,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == 0) - RTE_LOG(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq."); return scale_freq_min[lcore]; } @@ -756,12 +756,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == UINT32_MAX) { - RTE_LOG(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.\n"); + RTE_LOG_LINE(DEBUG, POWER, "Scaling freq max config not set. 
Using sysfs max freq."); return 0; } diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c index 9c20fe150d..d57fc18faa 100644 --- a/lib/power/rte_power_uncore.c +++ b/lib/power/rte_power_uncore.c @@ -101,7 +101,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_spinlock_lock(&global_env_cfg_lock); if (default_uncore_env != RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Uncore Power Management Env already set.\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Power Management Env already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -124,7 +124,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_power_uncore_get_num_pkgs = power_intel_uncore_get_num_pkgs; rte_power_uncore_get_num_dies = power_intel_uncore_get_num_dies; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", env); + RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env); ret = -1; goto out; } @@ -159,12 +159,12 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_init(pkg, die); default: - RTE_LOG(INFO, POWER, "Uncore Env isn't set yet!\n"); + RTE_LOG_LINE(INFO, POWER, "Uncore Env isn't set yet!"); break; } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...\n"); + RTE_LOG_LINE(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt..."); ret = power_intel_uncore_init(pkg, die); if (ret == 0) { rte_power_set_uncore_env(RTE_UNCORE_PM_ENV_INTEL_UNCORE); @@ -172,8 +172,8 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) } if (default_uncore_env == RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment " - "for package %u Die %u\n", pkg, die); + RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment " + "for package %u Die %u", pkg, die); ret = 0; } out: @@ -187,7 +187,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_exit(pkg, die); default: - RTE_LOG(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully\n"); + RTE_LOG_LINE(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully"); break; } return -1; diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 5b6530788a..bd0b83be0c 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -20,7 +20,7 @@ #include "rcu_qsbr_pvt.h" #define RCU_LOG(level, fmt, args...) 
\ - RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) /* Get the memory size of QSBR variable */ size_t diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c index 640719c3ec..847e45b9f7 100644 --- a/lib/reorder/rte_reorder.c +++ b/lib/reorder/rte_reorder.c @@ -74,34 +74,34 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize, }; if (b == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer parameter:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer parameter:" + " NULL"); rte_errno = EINVAL; return NULL; } if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } if (bufsize < min_bufsize) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer memory size: %u, " - "minimum required: %u\n", bufsize, min_bufsize); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer memory size: %u, " + "minimum required: %u", bufsize, min_bufsize); rte_errno = EINVAL; return NULL; } rte_reorder_seqn_dynfield_offset = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); if (rte_reorder_seqn_dynfield_offset < 0) { - RTE_LOG(ERR, REORDER, - "Failed to register mbuf field for reorder sequence number, rte_errno: %i\n", + RTE_LOG_LINE(ERR, REORDER, + "Failed to register mbuf field for reorder sequence number, rte_errno: %i", rte_errno); rte_errno = ENOMEM; return NULL; @@ -161,14 +161,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Check user arguments. */ if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } @@ -176,7 +176,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* allocate tailq entry */ te = rte_zmalloc("REORDER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, REORDER, "Failed to allocate tailq entry\n"); + RTE_LOG_LINE(ERR, REORDER, "Failed to allocate tailq entry"); rte_errno = ENOMEM; return NULL; } @@ -184,7 +184,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Allocate memory to store the reorder buffer structure. 
*/ b = rte_zmalloc_socket("REORDER_BUFFER", bufsize, 0, socket_id); if (b == NULL) { - RTE_LOG(ERR, REORDER, "Memzone allocation failed\n"); + RTE_LOG_LINE(ERR, REORDER, "Memzone allocation failed"); rte_errno = ENOMEM; rte_free(te); return NULL; diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c index 251d0d4ef1..baee4bff5a 100644 --- a/lib/rib/rte_rib.c +++ b/lib/rib/rte_rib.c @@ -416,8 +416,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB %s", name); return NULL; } @@ -441,8 +441,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("RIB_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -451,7 +451,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c index ad3d48ab8e..ce54f51208 100644 --- a/lib/rib/rte_rib6.c +++ b/lib/rib/rte_rib6.c @@ -485,8 +485,8 @@ rte_rib6_create(const char *name, int socket_id, NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate mempool for RIB6 %s", name); return NULL; } @@ -510,8 +510,8 @@ rte_rib6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("RIB6_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB6 %s\n", name); + RTE_LOG_LINE(ERR, LPM, + "Can not allocate tailq entry for RIB6 %s", name); rte_errno = ENOMEM; goto exit; } @@ -520,7 +520,7 @@ rte_rib6_create(const char *name, int socket_id, rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib6), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB6 %s memory allocation failed\n", name); + RTE_LOG_LINE(ERR, LPM, "RIB6 %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c index 12046419f1..7fd6576c8c 100644 --- a/lib/ring/rte_ring.c +++ b/lib/ring/rte_ring.c @@ -55,15 +55,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) /* Check if element size is a multiple of 4B */ if (esize % 4 != 0) { - RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + RTE_LOG_LINE(ERR, RING, "element size is not a multiple of 4"); return -EINVAL; } /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { - RTE_LOG(ERR, RING, - "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested number of elements is invalid, must be power of 2, and not exceed %u", RTE_RING_SZ_MASK); return -EINVAL; @@ -198,8 +198,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, /* future proof flags, only 
allow supported values */ if (flags & ~RING_F_MASK) { - RTE_LOG(ERR, RING, - "Unsupported flags requested %#x\n", flags); + RTE_LOG_LINE(ERR, RING, + "Unsupported flags requested %#x", flags); return -EINVAL; } @@ -219,8 +219,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, r->capacity = count; } else { if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) { - RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n", + RTE_LOG_LINE(ERR, RING, + "Requested size is invalid, must be power of 2, and not exceed the size limit %u", RTE_RING_SZ_MASK); return -EINVAL; } @@ -274,7 +274,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -299,7 +299,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, TAILQ_INSERT_TAIL(ring_list, te, next); } else { r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot reserve memory"); rte_free(te); } rte_mcfg_tailq_write_unlock(); @@ -331,8 +331,8 @@ rte_ring_free(struct rte_ring *r) * therefore, there is no memzone to free. */ if (r->memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring, not created with rte_ring_create()\n"); + RTE_LOG_LINE(ERR, RING, + "Cannot free ring, not created with rte_ring_create()"); return; } @@ -355,7 +355,7 @@ rte_ring_free(struct rte_ring *r) rte_mcfg_tailq_write_unlock(); if (rte_memzone_free(r->memzone) != 0) - RTE_LOG(ERR, RING, "Cannot free memory\n"); + RTE_LOG_LINE(ERR, RING, "Cannot free memory"); rte_free(te); } diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c index cce0ce762d..ac1f99e2bd 100644 --- a/lib/sched/rte_pie.c +++ b/lib/sched/rte_pie.c @@ -17,7 +17,7 @@ int rte_pie_rt_data_init(struct rte_pie *pie) { if (pie == NULL) { - RTE_LOG(ERR, SCHED, "%s: Invalid addr for pie\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Invalid addr for pie", __func__); return -EINVAL; } @@ -39,26 +39,26 @@ rte_pie_config_init(struct rte_pie_config *pie_cfg, return -1; if (qdelay_ref <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qdelay_ref\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qdelay_ref", __func__); return -EINVAL; } if (dp_update_interval <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for dp_update_interval\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for dp_update_interval", __func__); return -EINVAL; } if (max_burst <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for max_burst\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for max_burst", __func__); return -EINVAL; } if (tailq_th <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tailq_th\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tailq_th", __func__); return -EINVAL; } diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index 76dd8dd738..75f2f12007 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -325,23 +325,23 @@ pipe_profile_check(struct rte_sched_pipe_params *params, /* Pipe parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* TB rate: non-zero, not greater 
than port rate */ if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb rate", __func__); return -EINVAL; } /* TB size: non-zero */ if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tb size", __func__); return -EINVAL; } @@ -350,38 +350,38 @@ pipe_profile_check(struct rte_sched_pipe_params *params, if ((qsize[i] == 0 && params->tc_rate[i] != 0) || (qsize[i] != 0 && (params->tc_rate[i] == 0 || params->tc_rate[i] > params->tb_rate))) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize or tc_rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize or tc_rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 || qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for be traffic class rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for be traffic class rate", __func__); return -EINVAL; } /* TC period: non-zero */ if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc period", __func__); return -EINVAL; } /* Best effort tc oversubscription weight: non-zero */ if (params->tc_ov_weight == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc ov weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc ov weight", __func__); return -EINVAL; } /* Queue WRR weights: non-zero */ for (i = 0; i < RTE_SCHED_BE_QUEUES_PER_PIPE; i++) { if (params->wrr_weights[i] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for wrr weight\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for wrr weight", __func__); return -EINVAL; } } @@ -397,20 +397,20 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter params", __func__); return -EINVAL; } if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb rate", __func__); return -EINVAL; } if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb size\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tb size", __func__); return -EINVAL; } @@ -418,21 +418,21 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, uint64_t tc_rate = params->tc_rate[i]; if (tc_rate == 0 || (tc_rate > params->tb_rate)) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect tc rate(best effort)\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect tc rate(best effort)", __func__); return -EINVAL; } if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc period\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for tc period", __func__); return -EINVAL; } @@ -445,29 +445,29 @@ rte_sched_port_check_params(struct 
rte_sched_port_params *params) uint32_t i; if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* socket */ if (params->socket < 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for socket id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for socket id", __func__); return -EINVAL; } /* rate */ if (params->rate == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for rate\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for rate", __func__); return -EINVAL; } /* mtu */ if (params->mtu == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for mtu\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for mtu", __func__); return -EINVAL; } @@ -475,8 +475,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) if (params->n_subports_per_port == 0 || params->n_subports_per_port > 1u << 16 || !rte_is_power_of_2(params->n_subports_per_port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for number of subports\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for number of subports", __func__); return -EINVAL; } @@ -484,8 +484,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) params->n_subport_profiles == 0 || params->n_max_subport_profiles == 0 || params->n_subport_profiles > params->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport profiles", __func__); return -EINVAL; } @@ -496,8 +496,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) status = subport_profile_check(p, params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -506,8 +506,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) /* n_pipes_per_subport: non-zero, power of 2 */ if (params->n_pipes_per_subport == 0 || !rte_is_power_of_2(params->n_pipes_per_subport)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for maximum pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for maximum pipes number", __func__); return -EINVAL; } @@ -830,8 +830,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } @@ -842,14 +842,14 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, uint16_t qsize = params->qsize[i]; if (qsize != 0 && !rte_is_power_of_2(qsize)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for qsize", __func__); return -EINVAL; } } if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Incorrect qsize", __func__); return -EINVAL; } @@ -857,8 +857,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, if (params->n_pipes_per_subport_enabled == 0 || params->n_pipes_per_subport_enabled > n_max_pipes_per_subport || !rte_is_power_of_2(params->n_pipes_per_subport_enabled)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect 
value for pipes number\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipes number", __func__); return -EINVAL; } @@ -867,8 +867,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, params->n_pipe_profiles == 0 || params->n_max_pipe_profiles == 0 || params->n_pipe_profiles > params->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for pipe profiles\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for pipe profiles", __func__); return -EINVAL; } @@ -878,8 +878,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, status = pipe_profile_check(p, rate, ¶ms->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } } @@ -896,8 +896,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, status = rte_sched_port_check_params(port_params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler port params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler port params check failed (%d)", __func__, status); return 0; @@ -910,8 +910,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, port_params->n_pipes_per_subport, port_params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler subport params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler subport params check failed (%d)", __func__, status); return 0; @@ -941,8 +941,8 @@ rte_sched_port_config(struct rte_sched_port_params *params) status = rte_sched_port_check_params(params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(ERR, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); return NULL; } @@ -956,7 +956,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE, params->socket); if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); return NULL; } @@ -965,7 +965,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port->subport_profiles = rte_zmalloc_socket("subport_profile", size2, RTE_CACHE_LINE_SIZE, params->socket); if (port->subport_profiles == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__); rte_free(port); return NULL; } @@ -1107,8 +1107,8 @@ rte_sched_red_config(struct rte_sched_port *port, params->cman_params->red_params[i][j].maxp_inv) != 0) { rte_sched_free_memory(port, n_subports); - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: RED configuration init fails", __func__); return -EINVAL; } } @@ -1127,8 +1127,8 @@ rte_sched_pie_config(struct rte_sched_port *port, for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { if (params->cman_params->pie_params[i].tailq_th > params->qsize[i]) { - RTE_LOG(NOTICE, SCHED, - "%s: PIE tailq threshold incorrect\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE tailq threshold incorrect", __func__); return -EINVAL; } @@ -1139,8 +1139,8 @@ rte_sched_pie_config(struct rte_sched_port *port, params->cman_params->pie_params[i].tailq_th) != 0) { rte_sched_free_memory(port, n_subports); - 
RTE_LOG(NOTICE, SCHED, - "%s: PIE configuration init fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: PIE configuration init fails", __func__); return -EINVAL; } } @@ -1171,14 +1171,14 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port, struct rte_sched_subport *s; if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); return -EINVAL; } @@ -1204,21 +1204,21 @@ rte_sched_subport_config(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return 0; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); ret = -EINVAL; goto out; } if (subport_profile_id >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profile exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profile exceeds the max limit", __func__); ret = -EINVAL; goto out; @@ -1234,8 +1234,8 @@ rte_sched_subport_config(struct rte_sched_port *port, port->n_pipes_per_subport, port->rate); if (status != 0) { - RTE_LOG(NOTICE, SCHED, - "%s: Port scheduler params check failed (%d)\n", + RTE_LOG_LINE(NOTICE, SCHED, + "%s: Port scheduler params check failed (%d)", __func__, status); ret = -EINVAL; goto out; @@ -1250,8 +1250,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s = rte_zmalloc_socket("subport_params", size0 + size1, RTE_CACHE_LINE_SIZE, port->socket); if (s == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Memory allocation fails\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Memory allocation fails", __func__); ret = -ENOMEM; goto out; } @@ -1282,8 +1282,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->cman_enabled = true; status = rte_sched_cman_config(port, s, params, n_subports); if (status) { - RTE_LOG(NOTICE, SCHED, - "%s: CMAN configuration fails\n", __func__); + RTE_LOG_LINE(NOTICE, SCHED, + "%s: CMAN configuration fails", __func__); return status; } } else { @@ -1330,8 +1330,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, bmp_mem_size); if (s->bmp == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Subport bitmap init error\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Subport bitmap init error", __func__); ret = -EINVAL; goto out; } @@ -1400,29 +1400,29 @@ rte_sched_pipe_config(struct rte_sched_port *port, deactivate = (pipe_profile < 0); if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter subport id", __func__); ret = -EINVAL; goto out; } s = port->subports[subport_id]; if (pipe_id >= s->n_pipes_per_subport_enabled) { - RTE_LOG(ERR, SCHED, - 
"%s: Incorrect value for parameter pipe id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe id", __func__); ret = -EINVAL; goto out; } if (!deactivate && profile >= s->n_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter pipe profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter pipe profile", __func__); ret = -EINVAL; goto out; } @@ -1447,8 +1447,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } @@ -1489,8 +1489,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)\n", + RTE_LOG_LINE(DEBUG, SCHED, + "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } p->tc_ov_period_id = s->tc_ov_period_id; @@ -1518,15 +1518,15 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } /* Subport id not exceeds the max limit */ if (subport_id > port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } @@ -1534,16 +1534,16 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profiles exceeds the max limit */ if (s->n_pipe_profiles >= s->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Number of pipe profiles exceeds the max limit\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Number of pipe profiles exceeds the max limit", __func__); return -EINVAL; } /* Pipe params */ status = pipe_profile_check(params, port->rate, &s->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1553,8 +1553,8 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profile should not exists */ for (i = 0; i < s->n_pipe_profiles; i++) if (memcmp(s->pipe_profiles + i, pp, sizeof(*pp)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Pipe profile exists", __func__); return -EINVAL; } @@ -1581,20 +1581,20 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter port", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter profile\n", __func__); + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for parameter profile", __func__); return -EINVAL; } if (subport_profile_id == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter subport_profile_id\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Incorrect value for 
parameter subport_profile_id", __func__); return -EINVAL; } @@ -1603,16 +1603,16 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Subport profiles exceeds the max limit */ if (port->n_subport_profiles >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profiles exceeds the max limit\n", + RTE_LOG_LINE(ERR, SCHED, "%s: " + "Number of subport profiles exceeds the max limit", __func__); return -EINVAL; } status = subport_profile_check(params, port->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", __func__, status); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1622,8 +1622,8 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, for (i = 0; i < port->n_subport_profiles; i++) if (memcmp(port->subport_profiles + i, dst, sizeof(*dst)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile exists\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: subport profile exists", __func__); return -EINVAL; } @@ -1695,26 +1695,26 @@ rte_sched_subport_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (tc_ov == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc_ov\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for tc_ov", __func__); return -EINVAL; } @@ -1743,26 +1743,26 @@ rte_sched_queue_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (queue_id >= rte_sched_port_queues_per_port(port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for queue id\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for queue id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (qlen == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter qlen\n", __func__); + RTE_LOG_LINE(ERR, SCHED, + "%s: Incorrect value for parameter qlen", __func__); return -EINVAL; } subport_qmask = port->n_pipes_per_subport_log2 + 4; diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c index 902cb78eac..944f5064d2 100644 --- a/lib/table/rte_table_acl.c +++ b/lib/table/rte_table_acl.c @@ -65,21 +65,21 @@ rte_table_acl_create( /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for params", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for name", __func__); return NULL; } if 
(p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rules", __func__); return NULL; } if ((p->n_rule_fields == 0) || (p->n_rule_fields > RTE_ACL_MAX_FIELDS)) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rule_fields", __func__); return NULL; } @@ -98,8 +98,8 @@ rte_table_acl_create( acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (acl == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for ACL table", __func__, total_size); return NULL; } @@ -140,7 +140,7 @@ rte_table_acl_free(void *table) /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -164,7 +164,7 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Create low level ACL table */ ctx = rte_acl_create(&acl->acl_params); if (ctx == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot create low level ACL table", __func__); return -1; } @@ -176,8 +176,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) status = rte_acl_add_rules(ctx, acl->acl_rule_list[i], 1); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot add rule to low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot add rule to low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -196,8 +196,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Build low level ACl table */ status = rte_acl_build(ctx, &acl->cfg); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot build the low level ACL table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot build the low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -226,29 +226,29 @@ rte_table_acl_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } @@ -291,7 +291,7 @@ rte_table_acl_entry_add( /* Return if max rules */ if (free_pos_valid == 0) { - RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Max number of rules reached", __func__); return -ENOSPC; } @@ -342,15 +342,15 @@ rte_table_acl_entry_delete( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, 
"%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } @@ -424,28 +424,28 @@ rte_table_acl_entry_add_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to add\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to add", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entries_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries_ptr parameter is NULL", __func__); return -EINVAL; } @@ -455,20 +455,20 @@ rte_table_acl_entry_add_bulk( struct rte_table_acl_rule_add_params *rule; if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } if (entries[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } rule = keys[i]; if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__); return -EINVAL; } } @@ -604,26 +604,26 @@ rte_table_acl_entry_delete_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to delete\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to delete", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } for (i = 0; i < n_keys; i++) { if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c index a45b29ed6a..0b3107104d 100644 --- a/lib/table/rte_table_array.c +++ b/lib/table/rte_table_array.c @@ -61,8 +61,8 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size) total_size = 
total_cl_size * RTE_CACHE_LINE_SIZE; t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for array table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for array table", __func__, total_size); return NULL; } @@ -83,7 +83,7 @@ rte_table_array_free(void *table) /* Check input parameters */ if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -107,24 +107,24 @@ rte_table_array_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c index 86c960c103..228b49a893 100644 --- a/lib/table/rte_table_hash_cuckoo.c +++ b/lib/table/rte_table_hash_cuckoo.c @@ -47,27 +47,27 @@ static int check_params_create_hash_cuckoo(struct rte_table_hash_cuckoo_params *params) { if (params == NULL) { - RTE_LOG(ERR, TABLE, "NULL Input Parameters.\n"); + RTE_LOG_LINE(ERR, TABLE, "NULL Input Parameters."); return -EINVAL; } if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "Table name is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "Table name is NULL."); return -EINVAL; } if (params->key_size == 0) { - RTE_LOG(ERR, TABLE, "Invalid key_size.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid key_size."); return -EINVAL; } if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "Invalid n_keys.\n"); + RTE_LOG_LINE(ERR, TABLE, "Invalid n_keys."); return -EINVAL; } if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "f_hash is NULL.\n"); + RTE_LOG_LINE(ERR, TABLE, "f_hash is NULL."); return -EINVAL; } @@ -94,8 +94,8 @@ rte_table_hash_cuckoo_create(void *params, t = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for cuckoo hash table %s", __func__, total_size, p->name); return NULL; } @@ -114,8 +114,8 @@ rte_table_hash_cuckoo_create(void *params, if (h_table == NULL) { h_table = rte_hash_create(&hash_cuckoo_params); if (h_table == NULL) { - RTE_LOG(ERR, TABLE, - "%s: failed to create cuckoo hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: failed to create cuckoo hash table %s", __func__, p->name); rte_free(t); return NULL; @@ -131,8 +131,8 @@ rte_table_hash_cuckoo_create(void *params, t->key_offset = p->key_offset; t->h_table = h_table; - RTE_LOG(INFO, TABLE, - "%s: Cuckoo hash table %s memory footprint is %u bytes\n", + RTE_LOG_LINE(INFO, TABLE, + "%s: Cuckoo hash table %s memory footprint is %u bytes", __func__, 
p->name, total_size); return t; } diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c index 9f0220ded2..38ea96c654 100644 --- a/lib/table/rte_table_hash_ext.c +++ b/lib/table/rte_table_hash_ext.c @@ -128,33 +128,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -211,8 +211,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) key_sz + key_stack_sz + bkt_ext_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -222,13 +222,13 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " - "footprint is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " + "footprint is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c index 584c3f2c98..63b28f79c0 100644 --- a/lib/table/rte_table_hash_key16.c +++ b/lib/table/rte_table_hash_key16.c @@ -107,32 +107,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - 
RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -181,8 +181,8 @@ rte_table_hash_create_key16_lru(void *params, total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -192,13 +192,13 @@ rte_table_hash_create_key16_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -236,7 +236,7 @@ rte_table_hash_free_key16_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -391,8 +391,8 @@ rte_table_hash_create_key16_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -402,13 +402,13 @@ rte_table_hash_create_key16_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -446,7 +446,7 @@ rte_table_hash_free_key16_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c index 22b5ca9166..6293bf518b 100644 --- a/lib/table/rte_table_hash_key32.c +++ b/lib/table/rte_table_hash_key32.c @@ -111,32 +111,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || 
(!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -184,8 +184,8 @@ rte_table_hash_create_key32_lru(void *params, KEYS_PER_BUCKET * entry_size); total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -195,14 +195,14 @@ rte_table_hash_create_key32_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -244,7 +244,7 @@ rte_table_hash_free_key32_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -400,8 +400,8 @@ rte_table_hash_create_key32_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -411,14 +411,14 @@ rte_table_hash_create_key32_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64" bytes\n", + "is %" PRIu64" bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -460,7 +460,7 @@ rte_table_hash_free_key32_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c index bd0ec4aac0..69e61c2ec8 100644 --- a/lib/table/rte_table_hash_key8.c +++ b/lib/table/rte_table_hash_key8.c @@ -101,32 +101,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) 
{ - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -173,8 +173,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -184,14 +184,14 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -226,7 +226,7 @@ rte_table_hash_free_key8_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -377,8 +377,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -388,14 +388,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -430,7 +430,7 @@ rte_table_hash_free_key8_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c index 758ec4fe7a..190062b33f 100644 --- a/lib/table/rte_table_hash_lru.c +++ b/lib/table/rte_table_hash_lru.c @@ -105,33 +105,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: name 
invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -187,9 +187,9 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) key_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } @@ -199,14 +199,14 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, + RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" - " is %" PRIu64 " bytes\n", + RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" + " is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c index c2ef0d9ba0..989ab65ee6 100644 --- a/lib/table/rte_table_lpm.c +++ b/lib/table/rte_table_lpm.c @@ -59,29 +59,29 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid number_tbl8s", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -93,8 +93,8 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM table", __func__, total_size); return NULL; } @@ -107,7 +107,7 @@ 
rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n"); + RTE_LOG_LINE(ERR, TABLE, "Unable to create low-level LPM table"); return NULL; } @@ -127,7 +127,7 @@ rte_table_lpm_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -187,21 +187,21 @@ rte_table_lpm_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -216,7 +216,7 @@ rte_table_lpm_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -226,7 +226,7 @@ rte_table_lpm_entry_add( /* Add rule to low level LPM table */ if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule add failed", __func__); return -1; } @@ -253,16 +253,16 @@ rte_table_lpm_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -271,7 +271,7 @@ rte_table_lpm_entry_delete( status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM algorithmic error", __func__); return -1; } if (status == 0) { @@ -282,7 +282,7 @@ rte_table_lpm_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c index 6f3e11a14f..5b0e643832 100644 --- a/lib/table/rte_table_lpm_ipv6.c +++ b/lib/table/rte_table_lpm_ipv6.c @@ -56,29 +56,29 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NULL 
input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL", __func__); return NULL; } @@ -90,8 +90,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM IPv6 table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for LPM IPv6 table", __func__, total_size); return NULL; } @@ -103,8 +103,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config); if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, - "Unable to create low-level LPM IPv6 table\n"); + RTE_LOG_LINE(ERR, TABLE, + "Unable to create low-level LPM IPv6 table"); return NULL; } @@ -124,7 +124,7 @@ rte_table_lpm_ipv6_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -184,21 +184,21 @@ rte_table_lpm_ipv6_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__); return -1; } @@ -224,7 +224,7 @@ rte_table_lpm_ipv6_entry_add( /* Add rule to low level LPM table */ if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule add failed", __func__); return -1; } @@ -252,16 +252,16 @@ rte_table_lpm_ipv6_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: 
ip_prefix parameter is NULL\n", + RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -270,7 +270,7 @@ rte_table_lpm_ipv6_entry_delete( status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 algorithmic error", __func__); return -1; } @@ -282,7 +282,7 @@ rte_table_lpm_ipv6_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n", + RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c index cc21516995..a54b502f79 100644 --- a/lib/table/rte_table_stub.c +++ b/lib/table/rte_table_stub.c @@ -38,8 +38,8 @@ rte_table_stub_create(__rte_unused void *params, stub = rte_zmalloc_socket("TABLE", size, RTE_CACHE_LINE_SIZE, socket_id); if (stub == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for stub table\n", + RTE_LOG_LINE(ERR, TABLE, + "%s: Cannot allocate %u bytes for stub table", __func__, size); return NULL; } diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c index 83586c5b4f..ff91c3169a 100644 --- a/lib/vhost/fd_man.c +++ b/lib/vhost/fd_man.c @@ -334,8 +334,8 @@ fdset_pipe_init(struct fdset *fdset) int ret; if (pipe(fdset->u.pipefd) < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to create pipe for vhost fdset\n"); + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to create pipe for vhost fdset"); return -1; } @@ -343,8 +343,8 @@ fdset_pipe_init(struct fdset *fdset) fdset_pipe_read_cb, NULL, NULL); if (ret < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to add pipe readfd %d into vhost server fdset\n", + RTE_LOG_LINE(ERR, VHOST_FDMAN, + "failed to add pipe readfd %d into vhost server fdset", fdset->u.readfd); fdset_pipe_uninit(fdset); -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
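For readers following the conversion above: the RTE_LOG_LINE() helper used throughout this patch is introduced earlier in the series. As a rough, self-contained sketch of the concept only, not the actual lib/log implementation, the helper appends the trailing newline itself and, with GCC, turns a literal format string that already embeds '\n' into a build error (the sort of construct that needs extra care under -pedantic, as comes up later in the thread). The LOG_LINE name and the fprintf() back end below are placeholders:

	/* Illustrative sketch only; relies on GCC folding __builtin_strchr()
	 * on string literals (non-pedantic mode). */
	#include <assert.h>
	#include <stdio.h>

	#define LOG_LINE(fmt, ...) do { \
		/* Build error if the literal format already contains '\n'. */ \
		static_assert(!__builtin_strchr(fmt, '\n'), \
			"log format string must not contain a newline"); \
		fprintf(stderr, fmt "\n", ##__VA_ARGS__); \
	} while (0)

	int main(void)
	{
		LOG_LINE("%s: Incorrect value for parameter params", __func__);
		/* LOG_LINE("oops\n"); -- would fail to compile */
		return 0;
	}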
* Re: [PATCH v4 12/14] lib: convert to per line logging 2023-12-18 14:38 ` [PATCH v4 12/14] lib: convert to per line logging David Marchand @ 2023-12-20 13:46 ` Thomas Monjalon 2023-12-20 14:00 ` David Marchand 0 siblings, 1 reply; 122+ messages in thread From: Thomas Monjalon @ 2023-12-20 13:46 UTC (permalink / raw) To: David Marchand Cc: dev, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia 18/12/2023 15:38, David Marchand: > Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. > > Note: > - for acl and sched libraries that still has some debug multilines > messages, a direct call to RTE_LOG is used: this will make it easier to > notice such special cases, [...] > 129 files changed, 2280 insertions(+), 2279 deletions(-) It would be nice to keep RTE_LOG() only for multi-line as you do, and use RTE_LOG_LINE only in wrapper macros specific per library. This patch is using RTE_LOG_LINE directly in multiple places in libraries which do not have a dedicated wrapper macro. Would it be possible to introduce new wrappers for these libraries in this patch? ^ permalink raw reply [flat|nested] 122+ messages in thread
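Concretely, the kind of per-library wrapper suggested here could look roughly like the following; the SCHED_LOG name is only illustrative and may differ from whatever a later revision actually adds:

	/* In the library's private header (illustrative name): */
	#define SCHED_LOG(level, ...) \
		RTE_LOG_LINE(level, SCHED, __VA_ARGS__)

	/* Call sites then go through the wrapper instead of using
	 * RTE_LOG_LINE directly: */
	SCHED_LOG(ERR, "%s: Incorrect value for parameter params", __func__);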
* Re: [PATCH v4 12/14] lib: convert to per line logging 2023-12-20 13:46 ` Thomas Monjalon @ 2023-12-20 14:00 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 14:00 UTC (permalink / raw) To: Thomas Monjalon Cc: dev, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia On Wed, Dec 20, 2023 at 2:46 PM Thomas Monjalon <thomas@monjalon.net> wrote: > > 18/12/2023 15:38, David Marchand: > > Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE. > > > > Note: > > - for acl and sched libraries that still has some debug multilines > > messages, a direct call to RTE_LOG is used: this will make it easier to > > notice such special cases, > > [...] > > 129 files changed, 2280 insertions(+), 2279 deletions(-) > > It would be nice to keep RTE_LOG() only for multi-line as you do, > and use RTE_LOG_LINE only in wrapper macros specific per library. > This patch is using RTE_LOG_LINE directly in multiple places > in libraries which do not have a dedicated wrapper macro. > Would it be possible to introduce new wrappers for these libraries > in this patch? It is doable and not too difficult to achieve with some scripting. I'll try in the v5 (since I need to send a new revision for build with -pedantic). -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
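On the related split mentioned in the patch notes (single-line messages versus the multi-line debug dumps kept in acl and sched), hypothetical call sites for the two cases could look like this; the DEBUG dump below is invented for illustration and is not copied from rte_sched.c:

	/* One-line message: the helper appends the newline. */
	RTE_LOG_LINE(ERR, SCHED, "%s: Incorrect value for rate", __func__);

	/* Multi-line debug dump: keep a direct RTE_LOG() call and manage
	 * '\n' explicitly (fields are illustrative). */
	RTE_LOG(DEBUG, SCHED,
		"pipe profile %u:\n"
		"    tb period = %" PRIu64 "\n"
		"    tb credits per period = %" PRIu64 "\n",
		profile_id, tb_period, tb_credits_per_period);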
* [PATCH v4 13/14] lib: replace logging helpers 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (11 preceding siblings ...) 2023-12-18 14:38 ` [PATCH v4 12/14] lib: convert to per line logging David Marchand @ 2023-12-18 14:38 ` David Marchand 2023-12-18 14:38 ` [PATCH v4 14/14] lib: use per line logging in helpers David Marchand 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:38 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Ruifeng Wang, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia This is a preparation step before the next change. Many libraries have their own logging helpers that do not add a newline in their format string. Some previous changes fixed places where some of those helpers are called without a trailing newline. Using RTE_LOG_LINE in the existing helpers will ensure we don't introduce new issues in the future. The problem is that if we simply convert to the RTE_LOG_LINE helper, a future fix may introduce a regression since the logging helper change won't be backported. To address this concern, rename existing helpers: backporting a call to them will trigger some conflict or build issue in LTS branches. Note: - bpf and vhost that still has some debug multilines messages, a direct call to RTE_LOG/RTE_LOG_DP is used: this will make it easier to notice such special cases, - about previously publicly exposed logging helpers, when such helper is not publicly used (iow in public inline API), it is removed from the public API (this is the case for the member library), Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v2: - kept a RTE_ prefix for bpf log macro to avoid potential collision with external code, --- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 4 +- lib/bpf/bpf_validate.c | 38 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 878 +++++++++++++-------------- lib/ethdev/rte_ethdev.h | 52 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/pdump/rte_pdump.c | 112 ++-- lib/power/power_acpi_cpufreq.c | 10 +- lib/power/power_amd_pstate_cpufreq.c | 12 +- lib/power/power_common.c | 4 +- lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 12 +- lib/power/power_intel_uncore.c | 4 +- lib/power/power_pstate_cpufreq.c | 12 +- lib/regexdev/rte_regexdev.c | 86 +-- lib/regexdev/rte_regexdev.h | 14 +- lib/telemetry/telemetry.c | 41 +- lib/vhost/iotlb.c | 18 +- lib/vhost/socket.c | 102 ++-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 ++-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 ++-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_user.c | 508 
++++++++-------- lib/vhost/virtio_net.c | 188 +++--- lib/vhost/virtio_net_ctrl.c | 38 +- 50 files changed, 1431 insertions(+), 1414 deletions(-) create mode 100644 lib/member/member.h diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c index 8a0254d8bb..bbe75c8bfe 100644 --- a/lib/bpf/bpf.c +++ b/lib/bpf/bpf.c @@ -44,7 +44,7 @@ __rte_bpf_jit(struct rte_bpf *bpf) #endif if (rc != 0) - RTE_BPF_LOG(WARNING, "%s(%p) failed, error code: %d;\n", + RTE_BPF_LOG_LINE(WARNING, "%s(%p) failed, error code: %d;", __func__, bpf, rc); return rc; } diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index d441be6663..d7ff2b4325 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -226,8 +226,8 @@ static bool convert_bpf_load(const struct bpf_insn *fp, case SKF_AD_OFF + SKF_AD_RANDOM: case SKF_AD_OFF + SKF_AD_ALU_XOR_X: /* Linux has special negative offsets to access meta-data. */ - RTE_BPF_LOG(ERR, - "rte_bpf_convert: socket offset %d not supported\n", + RTE_BPF_LOG_LINE(ERR, + "rte_bpf_convert: socket offset %d not supported", fp->k - SKF_AD_OFF); return true; default: @@ -246,7 +246,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, uint8_t bpf_src; if (len > BPF_MAXINSNS) { - RTE_BPF_LOG(ERR, "%s: cBPF program too long (%zu insns)\n", + RTE_BPF_LOG_LINE(ERR, "%s: cBPF program too long (%zu insns)", __func__, len); return -EINVAL; } @@ -482,7 +482,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, /* Unknown instruction. */ default: - RTE_BPF_LOG(ERR, "%s: Unknown instruction!: %#x\n", + RTE_BPF_LOG_LINE(ERR, "%s: Unknown instruction!: %#x", __func__, fp->code); goto err; } @@ -526,7 +526,7 @@ rte_bpf_convert(const struct bpf_program *prog) int ret; if (prog == NULL) { - RTE_BPF_LOG(ERR, "%s: NULL program\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: NULL program", __func__); rte_errno = EINVAL; return NULL; } @@ -534,12 +534,12 @@ rte_bpf_convert(const struct bpf_program *prog) /* 1st pass: calculate the eBPF program length */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot get eBPF length\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot get eBPF length", __func__); rte_errno = -ret; return NULL; } - RTE_BPF_LOG(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u\n", + RTE_BPF_LOG_LINE(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u", __func__, prog->bf_len, ebpf_len); prm = rte_zmalloc("bpf_filter", @@ -555,7 +555,7 @@ rte_bpf_convert(const struct bpf_program *prog) /* 2nd pass: remap cBPF to eBPF instructions */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__); free(prm); rte_errno = -ret; return NULL; diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c index 09f4a9a571..5d597ec170 100644 --- a/lib/bpf/bpf_exec.c +++ b/lib/bpf/bpf_exec.c @@ -43,8 +43,8 @@ #define BPF_DIV_ZERO_CHECK(bpf, reg, ins, type) do { \ if ((type)(reg)[(ins)->src_reg] == 0) { \ - RTE_BPF_LOG(ERR, \ - "%s(%p): division by 0 at pc: %#zx;\n", \ + RTE_BPF_LOG_LINE(ERR, \ + "%s(%p): division by 0 at pc: %#zx;", \ __func__, bpf, \ (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \ return 0; \ @@ -136,8 +136,8 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM], mb = (const struct rte_mbuf *)(uintptr_t)reg[EBPF_REG_6]; p = rte_pktmbuf_read(mb, off, len, reg + EBPF_REG_0); if (p == NULL) - RTE_BPF_LOG(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, 
len=%u): " - "load beyond packet boundary at pc: %#zx;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " + "load beyond packet boundary at pc: %#zx;", __func__, bpf, mb, off, len, (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); return p; @@ -462,8 +462,8 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM]) case (BPF_JMP | EBPF_EXIT): return reg[EBPF_REG_0]; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %#zx;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %#zx;", __func__, bpf, ins->code, (uintptr_t)ins - (uintptr_t)bpf->prm.ins); return 0; diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index b483569071..6a82ae4ef2 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); int __rte_bpf_jit_arm64(struct rte_bpf *bpf); extern int rte_bpf_logtype; +#define RTE_LOGTYPE_BPF rte_bpf_logtype -#define RTE_BPF_LOG(lvl, fmt, args...) \ - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) +#define RTE_BPF_LOG_LINE(lvl, fmt, args...) \ + RTE_LOG(lvl, BPF, fmt "\n", ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index f9ddafd7dc..96b8cd2e03 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit) for (idx = 0; idx < limit; idx++) { if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) { - RTE_BPF_LOG(ERR, - "%s: invalid opcode at %u;\n", __func__, idx); + RTE_BPF_LOG_LINE(ERR, + "%s: invalid opcode at %u;", __func__, idx); return -EINVAL; } } @@ -1378,8 +1378,8 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) emit_epilogue(ctx); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c index a73b2006db..4d74e418f8 100644 --- a/lib/bpf/bpf_jit_x86.c +++ b/lib/bpf/bpf_jit_x86.c @@ -1476,8 +1476,8 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf) emit_epilog(st); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 45ce9210da..de43347405 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -98,7 +98,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) if (rc != 0) { rte_errno = -rc; - RTE_BPF_LOG(ERR, "%s: %d-th xsym is invalid\n", __func__, i); + RTE_BPF_LOG_LINE(ERR, "%s: %d-th xsym is invalid", __func__, i); return NULL; } diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c index 02a5d8ba0d..e0abd3c856 100644 --- a/lib/bpf/bpf_load_elf.c +++ b/lib/bpf/bpf_load_elf.c @@ -84,8 +84,8 @@ resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz, * as an ordinary EBPF_CALL. 
*/ if (ins[idx].src_reg == EBPF_PSEUDO_CALL) { - RTE_BPF_LOG(INFO, "%s(%u): " - "EBPF_PSEUDO_CALL to external function: %s\n", + RTE_BPF_LOG_LINE(INFO, "%s(%u): " + "EBPF_PSEUDO_CALL to external function: %s", __func__, idx, sn); ins[idx].src_reg = EBPF_REG_0; } @@ -121,7 +121,7 @@ check_elf_header(const Elf64_Ehdr *eh) err = "unexpected machine type"; if (err != NULL) { - RTE_BPF_LOG(ERR, "%s(): %s\n", __func__, err); + RTE_BPF_LOG_LINE(ERR, "%s(): %s", __func__, err); return -EINVAL; } @@ -144,7 +144,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) eh = elf64_getehdr(elf); if (eh == NULL) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -167,7 +167,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) if (sd == NULL || sd->d_size == 0 || sd->d_size % sizeof(struct ebpf_insn) != 0) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -216,8 +216,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz, rc = resolve_xsym(sn, ofs, ins, ins_sz, prm); if (rc != 0) { - RTE_BPF_LOG(ERR, - "resolve_xsym(%s, %zu) error code: %d\n", + RTE_BPF_LOG_LINE(ERR, + "resolve_xsym(%s, %zu) error code: %d", sn, ofs, rc); return rc; } @@ -309,7 +309,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, fd = open(fname, O_RDONLY); if (fd < 0) { rc = errno; - RTE_BPF_LOG(ERR, "%s(%s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%s) error code: %d(%s)", __func__, fname, rc, strerror(rc)); rte_errno = EINVAL; return NULL; @@ -319,15 +319,15 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, close(fd); if (bpf == NULL) { - RTE_BPF_LOG(ERR, + RTE_BPF_LOG_LINE(ERR, "%s(fname=\"%s\", sname=\"%s\") failed, " - "error code: %d\n", + "error code: %d", __func__, fname, sname, rte_errno); return NULL; } - RTE_BPF_LOG(INFO, "%s(fname=\"%s\", sname=\"%s\") " - "successfully creates %p(jit={.func=%p,.sz=%zu});\n", + RTE_BPF_LOG_LINE(INFO, "%s(fname=\"%s\", sname=\"%s\") " + "successfully creates %p(jit={.func=%p,.sz=%zu});", __func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz); return bpf; } diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c index 7a8e4a6ef4..793a75ded9 100644 --- a/lib/bpf/bpf_pkt.c +++ b/lib/bpf/bpf_pkt.c @@ -512,7 +512,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, ftx = select_tx_callback(prm->prog_arg.type, flags); if (frx == NULL && ftx == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no callback selected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no callback selected;", __func__, port, queue); return -EINVAL; } @@ -524,7 +524,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, rte_bpf_get_jit(bpf, &jit); if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no JIT generated;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no JIT generated;", __func__, port, queue); rte_bpf_destroy(bpf); return -ENOTSUP; diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index 83c2203622..1babb16bde 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,7 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", + 
RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libelf installed", __func__); rte_errno = ENOTSUP; return NULL; @@ -35,7 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", + RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libpcap installed", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index f246b3c5eb..79be5e917d 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1812,15 +1812,15 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) uint32_t ne; if (nidx > bvf->prm->nb_ins) { - RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " - "next pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, " + "next pc: %u", __func__, get_node_idx(bvf, node), nidx); return -EINVAL; } ne = node->nb_edge; if (ne >= RTE_DIM(node->edge_dest)) { - RTE_BPF_LOG(ERR, "%s: internal error at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: internal error at pc: %u", __func__, get_node_idx(bvf, node)); return -EINVAL; } @@ -1927,7 +1927,7 @@ log_unreachable(const struct bpf_verifier *bvf) if (node->colour == WHITE && ins->code != (BPF_LD | BPF_IMM | EBPF_DW)) - RTE_BPF_LOG(ERR, "unreachable code at pc: %u;\n", i); + RTE_BPF_LOG_LINE(ERR, "unreachable code at pc: %u;", i); } } @@ -1948,8 +1948,8 @@ log_loop(const struct bpf_verifier *bvf) for (j = 0; j != node->nb_edge; j++) { if (node->edge_type[j] == BACK_EDGE) - RTE_BPF_LOG(ERR, - "loop at pc:%u --> pc:%u;\n", + RTE_BPF_LOG_LINE(ERR, + "loop at pc:%u --> pc:%u;", i, node->edge_dest[j]); } } @@ -1979,7 +1979,7 @@ validate(struct bpf_verifier *bvf) err = check_syntax(ins); if (err != 0) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, i); rc |= -EINVAL; } @@ -2048,7 +2048,7 @@ validate(struct bpf_verifier *bvf) dfs(bvf); - RTE_BPF_LOG(DEBUG, "%s(%p) stats:\n" + RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" "nb_nodes=%u;\n" "nb_jcc_nodes=%u;\n" "node_color={[WHITE]=%u, [GREY]=%u,, [BLACK]=%u};\n" @@ -2062,7 +2062,7 @@ validate(struct bpf_verifier *bvf) bvf->edge_type[BACK_EDGE], bvf->edge_type[CROSS_EDGE]); if (bvf->node_colour[BLACK] != bvf->nb_nodes) { - RTE_BPF_LOG(ERR, "%s(%p) unreachable instructions;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) unreachable instructions;", __func__, bvf); log_unreachable(bvf); return -EINVAL; @@ -2070,13 +2070,13 @@ validate(struct bpf_verifier *bvf) if (bvf->node_colour[GREY] != 0 || bvf->node_colour[WHITE] != 0 || bvf->edge_type[UNKNOWN_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) DFS internal error;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) DFS internal error;", __func__, bvf); return -EINVAL; } if (bvf->edge_type[BACK_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) loops detected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) loops detected;", __func__, bvf); log_loop(bvf); return -EINVAL; @@ -2144,8 +2144,8 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) /* get new eval_state for this node */ st = pull_eval_state(bvf); if (st == NULL) { - RTE_BPF_LOG(ERR, - "%s: internal error (out of space) at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, + "%s: internal error (out of space) at pc: %u", __func__, get_node_idx(bvf, node)); return -ENOMEM; } @@ -2157,7 +2157,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) node->evst = bvf->evst; bvf->evst = st; - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, 
"%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), node->evst, bvf->evst); return 0; @@ -2169,7 +2169,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) static void restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), bvf->evst, node->evst); bvf->evst = node->evst; @@ -2184,12 +2184,12 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG_LINE(DEBUG, "%s(pc=%u):", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - RTE_BPF_LOG(DEBUG, + RTE_LOG(DEBUG, BPF, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2263,7 +2263,7 @@ evaluate(struct bpf_verifier *bvf) if (ins_chk[op].eval != NULL && rc == 0) { err = ins_chk[op].eval(bvf, ins + idx); if (err != NULL) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, idx); rc = -EINVAL; } @@ -2312,7 +2312,7 @@ __rte_bpf_validate(struct rte_bpf *bpf) bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR && (sizeof(uint64_t) != sizeof(uintptr_t) || bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) { - RTE_BPF_LOG(ERR, "%s: unsupported argument type\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: unsupported argument type", __func__); return -ENOTSUP; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index 55a9dcc565..bd917a15fc 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -80,12 +80,12 @@ rte_eth_dev_allocate(const char *name) name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN); if (name_len == 0) { - RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Zero length Ethernet device name"); return NULL; } if (name_len >= RTE_ETH_NAME_MAX_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Ethernet device name is too long"); return NULL; } @@ -96,16 +96,16 @@ rte_eth_dev_allocate(const char *name) goto unlock; if (eth_dev_allocated(name) != NULL) { - RTE_ETHDEV_LOG(ERR, - "Ethernet device with name %s already allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethernet device with name %s already allocated", name); goto unlock; } port_id = eth_dev_find_free_port(); if (port_id == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Reached maximum number of Ethernet ports\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Reached maximum number of Ethernet ports"); goto unlock; } @@ -163,8 +163,8 @@ rte_eth_dev_attach_secondary(const char *name) break; } if (i == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Device %s is not driven by the primary process\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Device %s is not driven by the primary process", name); } else { eth_dev = eth_dev_get(i); @@ -302,8 +302,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, device->numa_node); if (!ethdev->data->dev_private) { - RTE_ETHDEV_LOG(ERR, - "failed to allocate private data\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "failed to allocate private data"); retval = -ENOMEM; goto probe_failed; } @@ -311,8 +311,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, } else { ethdev = rte_eth_dev_attach_secondary(name); if (!ethdev) { - RTE_ETHDEV_LOG(ERR, - "secondary process attach failed, ethdev doesn't 
exist\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "secondary process attach failed, ethdev doesn't exist"); return -ENODEV; } } @@ -322,15 +322,15 @@ rte_eth_dev_create(struct rte_device *device, const char *name, if (ethdev_bus_specific_init) { retval = ethdev_bus_specific_init(ethdev, bus_init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, - "ethdev bus specific initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "ethdev bus specific initialisation failed"); goto probe_failed; } } retval = ethdev_init(ethdev, init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ethdev initialisation failed"); goto probe_failed; } @@ -394,7 +394,7 @@ void rte_eth_dev_internal_reset(struct rte_eth_dev *dev) { if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u must be stopped to allow reset", dev->data->port_id); return; } @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "duplicated representor key: %s", dargs); result = -1; goto parse_cleanup; @@ -524,7 +524,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); return -ENAMETOOLONG; } @@ -549,7 +549,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); rte_errno = ENAMETOOLONG; return NULL; } @@ -559,8 +559,8 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) || size > mz->len || ((uintptr_t)mz->addr & (align - 1)) != 0) { - RTE_ETHDEV_LOG(ERR, - "memzone %s does not justify the requested attributes\n", + RTE_ETHDEV_LOG_LINE(ERR, + "memzone %s does not justify the requested attributes", mz->name); return NULL; } @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG_LINE(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index ddb559aa95..737fff1833 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", + RTE_ETHDEV_LOG_LINE(ERR, "NULL pointer eth_dev=%p pci_dev=%p", (void *)eth_dev, (void *)pci_dev); return; } diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 0e1c7b23c1..a656df293c 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ 
rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG_LINE(ERR, "wrong representor format: %s", str); return str == NULL ? -1 : 0; } @@ -214,7 +214,7 @@ dummy_eth_rx_burst(void *rxq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->rx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->rx_warn_once = true; @@ -233,7 +233,7 @@ dummy_eth_tx_burst(void *txq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->tx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->tx_warn_once = true; @@ -337,7 +337,7 @@ eth_dev_shared_data_prepare(void) sizeof(*eth_dev_shared_data), rte_socket_id(), flags); if (mz == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot allocate ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot allocate ethdev shared data"); goto out; } @@ -355,7 +355,7 @@ eth_dev_shared_data_prepare(void) /* Clean remaining any traces of a previous shared mem */ eth_dev_shared_mz = NULL; eth_dev_shared_data = NULL; - RTE_ETHDEV_LOG(ERR, "Cannot lookup ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot lookup ethdev shared data"); goto out; } if (mz == eth_dev_shared_mz && mz->addr == eth_dev_shared_data) diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 311beb17cb..bc003db8af 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); + RTE_ETHDEV_LOG_LINE(ERR, "cannot parse argument list"); rte_errno = EINVAL; return NULL; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 9dd0efa9d8..c5e75a91c8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -182,13 +182,13 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) int str_size; if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL iterator"); return -EINVAL; } if (devargs_str == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot initialize iterator from NULL device description string\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot initialize iterator from NULL device description string"); return -EINVAL; } @@ -279,7 +279,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) error: if (ret == -ENOTSUP) - RTE_ETHDEV_LOG(ERR, "Bus %s does not support iterating.\n", + RTE_ETHDEV_LOG_LINE(ERR, "Bus %s does not support iterating.", iter->bus->name); rte_devargs_reset(&devargs); free(bus_str); @@ -291,8 +291,8 @@ uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter) { if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get next device from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get next device from NULL iterator"); return RTE_MAX_ETHPORTS; } @@ -331,7 +331,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) { if (iter == NULL) { - 
RTE_ETHDEV_LOG(ERR, "Cannot do clean up from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot do clean up from NULL iterator"); return; } @@ -447,7 +447,7 @@ rte_eth_dev_owner_new(uint64_t *owner_id) int ret; if (owner_id == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get new owner ID to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get new owner ID to NULL"); return -EINVAL; } @@ -477,30 +477,30 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, struct rte_eth_dev_owner *port_owner; if (port_id >= RTE_MAX_ETHPORTS || !eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (new_owner == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u owner from NULL owner\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u owner from NULL owner", port_id); return -EINVAL; } if (!eth_is_valid_owner_id(new_owner->id) && !eth_is_valid_owner_id(old_owner_id)) { - RTE_ETHDEV_LOG(ERR, - "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64, old_owner_id, new_owner->id); return -EINVAL; } port_owner = &rte_eth_devices[port_id].data->owner; if (port_owner->id != old_owner_id) { - RTE_ETHDEV_LOG(ERR, - "Cannot set owner to port %u already owned by %s_%016"PRIX64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set owner to port %u already owned by %s_%016"PRIX64, port_id, port_owner->name, port_owner->id); return -EPERM; } @@ -510,7 +510,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, port_owner->id = new_owner->id; - RTE_ETHDEV_LOG(DEBUG, "Port %u owner is %s_%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Port %u owner is %s_%016"PRIx64, port_id, new_owner->name, new_owner->id); return 0; @@ -575,14 +575,14 @@ rte_eth_dev_owner_delete(const uint64_t owner_id) memset(&data->owner, 0, sizeof(struct rte_eth_dev_owner)); } - RTE_ETHDEV_LOG(NOTICE, - "All port owners owned by %016"PRIx64" identifier have removed\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "All port owners owned by %016"PRIx64" identifier have removed", owner_id); eth_dev_shared_data->allocated_owners--; eth_dev_shared_data_release(); } else { - RTE_ETHDEV_LOG(ERR, - "Invalid owner ID=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner ID=%016"PRIx64, owner_id); ret = -EINVAL; } @@ -604,13 +604,13 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner) ethdev = &rte_eth_devices[port_id]; if (!eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (owner == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u owner to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u owner to NULL", port_id); return -EINVAL; } @@ -699,7 +699,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u name to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u name to NULL", port_id); return -EINVAL; } @@ -724,13 +724,13 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get port ID from NULL name"); return -EINVAL; } if (port_id == NULL) 
{ - RTE_ETHDEV_LOG(ERR, - "Cannot get port ID to NULL for %s\n", name); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get port ID to NULL for %s", name); return -EINVAL; } @@ -766,16 +766,16 @@ eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Rx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Rx queue_id=%u of device with port_id=%u", rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queues[rx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", rx_queue_id, port_id); return -EINVAL; } @@ -790,16 +790,16 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id) if (tx_queue_id >= dev->data->nb_tx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Tx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Tx queue_id=%u of device with port_id=%u", tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queues[tx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", tx_queue_id, port_id); return -EINVAL; } @@ -839,8 +839,8 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any queue", port_id); return -EINVAL; } @@ -853,15 +853,15 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", rx_queue_id, port_id); return 0; } @@ -890,15 +890,15 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", rx_queue_id, port_id); return 0; } @@ -920,8 +920,8 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any 
queue", port_id); return -EINVAL; } @@ -934,15 +934,15 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", tx_queue_id, port_id); return 0; } @@ -971,15 +971,15 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", tx_queue_id, port_id); return 0; } @@ -1153,19 +1153,19 @@ eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size, if (dev_info_size == 0) { if (config_size != max_rx_pkt_len) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size" - " %u != %u is not allowed\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size" + " %u != %u is not allowed", port_id, config_size, max_rx_pkt_len); ret = -EINVAL; } } else if (config_size > dev_info_size) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "> max allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "> max allowed value %u", port_id, config_size, dev_info_size); ret = -EINVAL; } else if (config_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "< min allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "< min allowed value %u", port_id, config_size, (unsigned int)RTE_ETHER_MIN_LEN); ret = -EINVAL; } @@ -1203,16 +1203,16 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads, /* Check if any offload is requested but not enabled. */ offload = RTE_BIT64(rte_ctz64(offloads_diff)); if (offload & req_offloads) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to enable %s offload %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to enable %s offload %s", port_id, offload_type, offload_name(offload)); ret = -EINVAL; } /* Check if offload couldn't be disabled. 
*/ if (offload & set_offloads) { - RTE_ETHDEV_LOG(DEBUG, - "Port %u %s offload %s is not requested but enabled\n", + RTE_ETHDEV_LOG_LINE(DEBUG, + "Port %u %s offload %s is not requested but enabled", port_id, offload_type, offload_name(offload)); } @@ -1244,14 +1244,14 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, uint32_t frame_size; if (mtu < dev_info->min_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) < device min MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) < device min MTU (%u) for port_id %u", mtu, dev_info->min_mtu, port_id); return -EINVAL; } if (mtu > dev_info->max_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) > device max MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) > device max MTU (%u) for port_id %u", mtu, dev_info->max_mtu, port_id); return -EINVAL; } @@ -1260,15 +1260,15 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, dev_info->max_mtu); frame_size = mtu + overhead_len; if (frame_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) < min frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) < min frame size (%u) for port_id %u", frame_size, RTE_ETHER_MIN_LEN, port_id); return -EINVAL; } if (frame_size > dev_info->max_rx_pktlen) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) > device max frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) > device max frame size (%u) for port_id %u", frame_size, dev_info->max_rx_pktlen, port_id); return -EINVAL; } @@ -1292,8 +1292,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot configure ethdev port %u from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot configure ethdev port %u from NULL config", port_id); return -EINVAL; } @@ -1302,8 +1302,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, return -ENOTSUP; if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } @@ -1334,7 +1334,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.reserved_64s[1] != 0 || dev_conf->rxmode.reserved_ptrs[0] != NULL || dev_conf->rxmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rxmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1343,7 +1343,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.reserved_64s[1] != 0 || dev_conf->txmode.reserved_ptrs[0] != NULL || dev_conf->txmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "txmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1368,16 +1368,16 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, } if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Rx queues requested (%u) is greater than max supported(%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Number of Rx queues requested (%u) is greater than max supported(%d)", nb_rx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; } if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Tx queues requested (%u) is greater than max supported(%d)\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Number of Tx queues requested (%u) is greater than max supported(%d)", nb_tx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; @@ -1389,14 +1389,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, * configured device. */ if (nb_rx_q > dev_info.max_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u", port_id, nb_rx_q, dev_info.max_rx_queues); ret = -EINVAL; goto rollback; } if (nb_tx_q > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u", port_id, nb_tx_q, dev_info.max_tx_queues); ret = -EINVAL; goto rollback; @@ -1405,14 +1405,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check that the device supports requested interrupts */ if ((dev_conf->intr_conf.lsc == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support lsc\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support lsc", dev->device->driver->name); ret = -EINVAL; goto rollback; } if ((dev_conf->intr_conf.rmv == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_RMV))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support rmv\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support rmv", dev->device->driver->name); ret = -EINVAL; goto rollback; @@ -1456,14 +1456,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Rx offloads %s", port_id, eth_dev_offload_names( dev_conf->rxmode.offloads & ~dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s", port_id, eth_dev_offload_names(dev_conf->rxmode.offloads, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Rx offloads %s", port_id, eth_dev_offload_names(dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); @@ -1474,14 +1474,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Tx offloads %s", port_id, eth_dev_offload_names( dev_conf->txmode.offloads & ~dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s", port_id, eth_dev_offload_names(dev_conf->txmode.offloads, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Tx offloads %s", port_id, eth_dev_offload_names(dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); ret = -EINVAL; @@ -1495,8 +1495,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if 
((dev_info.flow_type_rss_offloads | dev_conf->rx_adv_conf.rss_conf.rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, dev_conf->rx_adv_conf.rss_conf.rss_hf, dev_info.flow_type_rss_offloads); ret = -EINVAL; @@ -1506,8 +1506,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check if Rx RSS distribution is disabled but RSS hash is enabled. */ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) && (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested", port_id, rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH)); ret = -EINVAL; @@ -1516,8 +1516,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if (dev_conf->rx_adv_conf.rss_conf.rss_key != NULL && dev_conf->rx_adv_conf.rss_conf.rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, dev_conf->rx_adv_conf.rss_conf.rss_key_len, dev_info.hash_key_size); ret = -EINVAL; @@ -1527,9 +1527,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, algorithm = dev_conf->rx_adv_conf.rss_conf.algorithm; if ((size_t)algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, algorithm, dev_info.rss_algo_capa); ret = -EINVAL; goto rollback; @@ -1540,8 +1540,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, */ diag = eth_dev_rx_queue_config(dev, nb_rx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_rx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_rx_queue_config = %d", port_id, diag); ret = diag; goto rollback; @@ -1549,8 +1549,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = eth_dev_tx_queue_config(dev, nb_tx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_tx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_tx_queue_config = %d", port_id, diag); eth_dev_rx_queue_config(dev, 0); ret = diag; @@ -1559,7 +1559,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = (*dev->dev_ops->dev_configure)(dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u dev_configure = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u dev_configure = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1568,7 +1568,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Initialize Rx profiling if enabled at compilation time. 
*/ diag = __rte_eth_dev_profile_init(port_id, dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u __rte_eth_dev_profile_init = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u __rte_eth_dev_profile_init = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1666,8 +1666,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1676,8 +1676,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1693,8 +1693,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1703,8 +1703,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1728,15 +1728,15 @@ rte_eth_dev_start(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already started", port_id); return 0; } @@ -1757,13 +1757,13 @@ rte_eth_dev_start(uint16_t port_id) ret = eth_dev_config_restore(dev, &dev_info, port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Error during restoring configuration for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Error during restoring configuration for device (port %u): %s", port_id, rte_strerror(-ret)); ret_stop = rte_eth_dev_stop(port_id); if (ret_stop != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u): %s", port_id, rte_strerror(-ret_stop)); } @@ -1796,8 +1796,8 @@ rte_eth_dev_stop(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already stopped", port_id); return 0; } @@ -1866,7 +1866,7 @@ rte_eth_dev_close(uint16_t port_id) */ if (rte_eal_process_type() == RTE_PROC_PRIMARY && dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Cannot close started device (port %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot close started device (port %u)", port_id); return -EINVAL; } @@ -1897,8 +1897,8 @@ 
rte_eth_dev_reset(uint16_t port_id) ret = rte_eth_dev_stop(port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u) before reset: %s - ignore\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u) before reset: %s - ignore", port_id, rte_strerror(-ret)); } ret = eth_err(port_id, dev->dev_ops->dev_reset(dev)); @@ -1946,7 +1946,7 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, */ if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) { - RTE_ETHDEV_LOG(ERR, "%s private_data_size %u < %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "%s private_data_size %u < %u", mp->name, mp->private_data_size, (unsigned int) sizeof(struct rte_pktmbuf_pool_private)); @@ -1954,8 +1954,8 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, } data_room_size = rte_pktmbuf_data_room_size(mp); if (data_room_size < offset + min_length) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (%u + %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", mp->name, data_room_size, offset + min_length, offset, min_length); return -EINVAL; @@ -2001,8 +2001,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, int i; if (n_seg > seg_capa->max_nseg) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx segments %u exceed supported %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx segments %u exceed supported %u", n_seg, seg_capa->max_nseg); return -EINVAL; } @@ -2023,24 +2023,24 @@ rte_eth_rx_queue_check_split(uint16_t port_id, uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { - RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null mempool pointer"); ret = -EINVAL; goto out; } if (seg_idx != 0 && mp_first != mpl && seg_capa->multi_pools == 0) { - RTE_ETHDEV_LOG(ERR, "Receiving to multiple pools is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Receiving to multiple pools is not supported"); ret = -ENOTSUP; goto out; } if (offset != 0) { if (seg_capa->offset_allowed == 0) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation with offset is not supported"); ret = -ENOTSUP; goto out; } if (offset & offset_mask) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation invalid offset alignment %u, %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation invalid offset alignment %u, %u", offset, seg_capa->offset_align_log2); ret = -EINVAL; @@ -2053,22 +2053,22 @@ rte_eth_rx_queue_check_split(uint16_t port_id, if (proto_hdr != 0) { /* Split based on protocol headers. 
*/ if (length != 0) { - RTE_ETHDEV_LOG(ERR, - "Do not set length split and protocol split within a segment\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Do not set length split and protocol split within a segment" ); ret = -EINVAL; goto out; } if ((proto_hdr & prev_proto_hdrs) != 0) { - RTE_ETHDEV_LOG(ERR, - "Repeat with previous protocol headers or proto-split after length-based split\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Repeat with previous protocol headers or proto-split after length-based split" ); ret = -EINVAL; goto out; } if (ptype_cnt <= 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to get supported buffer split header protocols\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to get supported buffer split header protocols", port_id); ret = -ENOTSUP; goto out; @@ -2078,8 +2078,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, break; } if (i == ptype_cnt) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx split header protocols 0x%x is not supported.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx split header protocols 0x%x is not supported.", proto_hdr); ret = -EINVAL; goto out; @@ -2109,8 +2109,8 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, int ret; if (n_mempools > dev_info->max_rx_mempools) { - RTE_ETHDEV_LOG(ERR, - "Too many Rx mempools %u vs maximum %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Too many Rx mempools %u vs maximum %u", n_mempools, dev_info->max_rx_mempools); return -EINVAL; } @@ -2119,7 +2119,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, struct rte_mempool *mp = rx_mempools[pool_idx]; if (mp == NULL) { - RTE_ETHDEV_LOG(ERR, "null Rx mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null Rx mempool pointer"); return -EINVAL; } @@ -2153,7 +2153,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } @@ -2165,7 +2165,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_conf->reserved_64s[1] != 0 || rx_conf->reserved_ptrs[0] != NULL || rx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Rx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx conf reserved fields not zero"); return -EINVAL; } @@ -2181,8 +2181,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if ((mp != NULL) + (rx_conf != NULL && rx_conf->rx_nseg > 0) + (rx_conf != NULL && rx_conf->rx_nmempool > 0) != 1) { - RTE_ETHDEV_LOG(ERR, - "Ambiguous Rx mempools configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Ambiguous Rx mempools configuration"); return -EINVAL; } @@ -2196,9 +2196,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mbp_buf_size = rte_pktmbuf_data_room_size(mp); buf_data_size = mbp_buf_size - RTE_PKTMBUF_HEADROOM; if (buf_data_size > dev_info.max_rx_bufsize) - RTE_ETHDEV_LOG(DEBUG, + RTE_ETHDEV_LOG_LINE(DEBUG, "For port_id=%u, the mbuf data buffer size (%u) is bigger than " - "max buffer size (%u) device can utilize, so mbuf size can be reduced.\n", + "max buffer size (%u) device can utilize, so mbuf size can be reduced.", port_id, buf_data_size, dev_info.max_rx_bufsize); } else if (rx_conf != NULL && rx_conf->rx_nseg > 0) { const struct rte_eth_rxseg_split *rx_seg; @@ -2206,8 +2206,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, /* Extended multi-segment configuration check. 
*/ if (rx_conf->rx_seg == NULL) { - RTE_ETHDEV_LOG(ERR, - "Memory pool is null and no multi-segment configuration provided\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Memory pool is null and no multi-segment configuration provided"); return -EINVAL; } @@ -2221,13 +2221,13 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No Rx segmentation offload configured"); return -EINVAL; } } else if (rx_conf != NULL && rx_conf->rx_nmempool > 0) { /* Extended multi-pool configuration check. */ if (rx_conf->rx_mempools == NULL) { - RTE_ETHDEV_LOG(ERR, "Memory pools array is null\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Memory pools array is null"); return -EINVAL; } @@ -2238,7 +2238,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "Missing Rx mempool configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Missing Rx mempool configuration"); return -EINVAL; } @@ -2254,8 +2254,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc < dev_info.rx_desc_lim.nb_min || nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_rx_desc, dev_info.rx_desc_lim.nb_max, dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_align); @@ -2299,9 +2299,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, */ if ((local_conf.offloads & dev_info.rx_queue_offload_capa) != local_conf.offloads) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d rx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, rx_queue_id, local_conf.offloads, dev_info.rx_queue_offload_capa, __func__); @@ -2310,8 +2310,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (local_conf.share_group > 0 && (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share", port_id, rx_queue_id, local_conf.share_group); return -EINVAL; } @@ -2367,20 +2367,20 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Rx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Rx hairpin queue from NULL config", port_id); return -EINVAL; } if (conf->reserved != 0) { - RTE_ETHDEV_LOG(ERR, - "Rx hairpin reserved field not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Rx hairpin reserved field not zero"); return -EINVAL; } @@ -2393,42 +2393,42 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (nb_rx_desc == 0) nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for 
nb_rx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Rx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Rx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Rx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2472,7 +2472,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -2484,7 +2484,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, tx_conf->reserved_64s[1] != 0 || tx_conf->reserved_ptrs[0] != NULL || tx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Tx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Tx conf reserved fields not zero"); return -EINVAL; } @@ -2502,8 +2502,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc > dev_info.tx_desc_lim.nb_max || nb_tx_desc < dev_info.tx_desc_lim.nb_min || nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_tx_desc, dev_info.tx_desc_lim.nb_max, dev_info.tx_desc_lim.nb_min, dev_info.tx_desc_lim.nb_align); @@ -2547,9 +2547,9 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, */ if ((local_conf.offloads & dev_info.tx_queue_offload_capa) != local_conf.offloads) { - 
RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d tx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, tx_queue_id, local_conf.offloads, dev_info.tx_queue_offload_capa, __func__); @@ -2576,13 +2576,13 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Tx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Tx hairpin queue from NULL config", port_id); return -EINVAL; } @@ -2596,42 +2596,42 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc == 0) nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Tx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Tx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Tx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2671,7 +2671,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is not started\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is not started", tx_port); 
return -EBUSY; } @@ -2679,8 +2679,8 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_bind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to bind hairpin Tx %d" - " to Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to bind hairpin Tx %d" + " to Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_bind(tx_port, rx_port, ret); @@ -2698,7 +2698,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is already stopped\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is already stopped", tx_port); return -EBUSY; } @@ -2706,8 +2706,8 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_unbind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to unbind hairpin Tx %d" - " from Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to unbind hairpin Tx %d" + " from Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_unbind(tx_port, rx_port, ret); @@ -2726,15 +2726,15 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, dev = &rte_eth_devices[port_id]; if (peer_ports == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to NULL", port_id); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to array with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to array with zero size", port_id); return -EINVAL; } @@ -2745,7 +2745,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, ret = (*dev->dev_ops->hairpin_get_peer_ports)(dev, peer_ports, len, direction); if (ret < 0) - RTE_ETHDEV_LOG(ERR, "Failed to get %d hairpin peer %s ports\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to get %d hairpin peer %s ports", port_id, direction ? 
"Rx" : "Tx"); rte_eth_trace_hairpin_get_peer_ports(port_id, peer_ports, len, @@ -2780,8 +2780,8 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer, buffer_tx_error_fn cbfn, void *userdata) { if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set Tx buffer error callback to NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set Tx buffer error callback to NULL buffer"); return -EINVAL; } @@ -2799,7 +2799,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size) int ret = 0; if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL buffer"); return -EINVAL; } @@ -2977,7 +2977,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3005,7 +3005,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3093,18 +3093,18 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link) int ret; if (str == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert link to NULL string\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert link to NULL string"); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot convert link to string with zero size\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot convert link to string with zero size"); return -EINVAL; } if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert to string from NULL link\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert to string from NULL link"); return -EINVAL; } @@ -3133,7 +3133,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) dev = &rte_eth_devices[port_id]; if (stats == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u stats to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u stats to NULL", port_id); return -EINVAL; } @@ -3220,15 +3220,15 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (xstat_name == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID from NULL xstat name\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID from NULL xstat name", port_id); return -ENOMEM; } if (id == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID to NULL", port_id); return -ENOMEM; } @@ -3236,7 +3236,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, /* Get count */ cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL); if (cnt_xstats < 0) { - RTE_ETHDEV_LOG(ERR, "Cannot get count of xstats\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get count of xstats"); return -ENODEV; } @@ -3245,7 +3245,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, if (cnt_xstats != rte_eth_xstats_get_names_by_id( port_id, xstats_names, cnt_xstats, NULL)) { - RTE_ETHDEV_LOG(ERR, "Cannot get xstats lookup\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get xstats lookup"); return -1; } @@ -3376,7 +3376,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, 
sizeof(struct rte_eth_xstat_name)); if (!xstats_names_copy) { - RTE_ETHDEV_LOG(ERR, "Can't allocate memory\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Can't allocate memory"); return -ENOMEM; } @@ -3404,7 +3404,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); free(xstats_names_copy); return -1; } @@ -3600,7 +3600,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); return -1; } values[i] = xstats[ids[i]].value; @@ -3748,8 +3748,8 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size) dev = &rte_eth_devices[port_id]; if (fw_version == NULL && fw_size > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FW version to NULL when string size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FW version to NULL when string size is non zero", port_id); return -EINVAL; } @@ -3781,7 +3781,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info) dev = &rte_eth_devices[port_id]; if (dev_info == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u info to NULL", port_id); return -EINVAL; } @@ -3837,8 +3837,8 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf) dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u configuration to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u configuration to NULL", port_id); return -EINVAL; } @@ -3862,8 +3862,8 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3912,8 +3912,8 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (num > 0 && set_ptypes == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u set packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u set packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3992,7 +3992,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, struct rte_eth_dev_info dev_info; if (ma == NULL) { - RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__); + RTE_ETHDEV_LOG_LINE(ERR, "%s: invalid parameters", __func__); return -EINVAL; } @@ -4019,8 +4019,8 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr) dev = &rte_eth_devices[port_id]; if (mac_addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u MAC address to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u MAC address to NULL", port_id); return -EINVAL; } @@ -4041,7 +4041,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu) dev = &rte_eth_devices[port_id]; if (mtu == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u MTU to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u MTU to NULL", 
port_id); return -EINVAL; } @@ -4082,8 +4082,8 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) } if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before MTU set\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before MTU set", port_id); return -EINVAL; } @@ -4110,13 +4110,13 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on) if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) { - RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: VLAN-filtering disabled", port_id); return -ENOSYS; } if (vlan_id > 4095) { - RTE_ETHDEV_LOG(ERR, "Port_id=%u invalid vlan_id=%u > 4095\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port_id=%u invalid vlan_id=%u > 4095", port_id, vlan_id); return -EINVAL; } @@ -4156,7 +4156,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_queue_id=%u", rx_queue_id); return -EINVAL; } @@ -4261,10 +4261,10 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask) /* Rx VLAN offloading must be within its device capabilities */ if ((dev_offloads & dev_info.rx_offload_capa) != dev_offloads) { new_offloads = dev_offloads & ~orig_offloads; - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u requested new added VLAN offloads " "0x%" PRIx64 " must be within Rx offloads capabilities " - "0x%" PRIx64 " in %s()\n", + "0x%" PRIx64 " in %s()", port_id, new_offloads, dev_info.rx_offload_capa, __func__); return -EINVAL; @@ -4342,8 +4342,8 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u flow control config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u flow control config to NULL", port_id); return -EINVAL; } @@ -4368,14 +4368,14 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u flow control from NULL config", port_id); return -EINVAL; } if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid send_xon, only 0/1 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid send_xon, only 0/1 allowed"); return -EINVAL; } @@ -4399,14 +4399,14 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u priority flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u priority flow control from NULL config", port_id); return -EINVAL; } if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid priority, only 0-7 allowed"); return -EINVAL; } @@ -4428,16 +4428,16 @@ validate_rx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->rx_pause.tx_qid >= dev_info->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Tx queue not in range for Rx pause requested:%d configured:%d\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "PFC Tx queue not in range for Rx pause requested:%d configured:%d", pfc_queue_conf->rx_pause.tx_qid, dev_info->nb_tx_queues); return -EINVAL; } if (pfc_queue_conf->rx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Rx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Rx pause requested:%d max:%d", pfc_queue_conf->rx_pause.tc, tc_max); return -EINVAL; } @@ -4453,16 +4453,16 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->tx_pause.rx_qid >= dev_info->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Rx queue not in range for Tx pause requested:%d configured:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC Rx queue not in range for Tx pause requested:%d configured:%d", pfc_queue_conf->tx_pause.rx_qid, dev_info->nb_rx_queues); return -EINVAL; } if (pfc_queue_conf->tx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Tx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Tx pause requested:%d max:%d", pfc_queue_conf->tx_pause.tc, tc_max); return -EINVAL; } @@ -4482,7 +4482,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_info == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC info param is NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC info param is NULL for port (%u)", port_id); return -EINVAL; } @@ -4511,7 +4511,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_conf == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC parameters are NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC parameters are NULL for port (%u)", port_id); return -EINVAL; } @@ -4525,7 +4525,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, return ret; if (pfc_info.tc_max == 0) { - RTE_ETHDEV_LOG(ERR, "Ethdev port %u does not support PFC TC values\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port %u does not support PFC TC values", port_id); return -ENOTSUP; } @@ -4533,14 +4533,14 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, /* Check requested mode supported or not */ if (pfc_info.mode_capa == RTE_ETH_FC_RX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Tx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Tx pause unsupported for port (%d)", port_id); return -EINVAL; } if (pfc_info.mode_capa == RTE_ETH_FC_TX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Rx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Rx pause unsupported for port (%d)", port_id); return -EINVAL; } @@ -4597,7 +4597,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t i, idx, shift; if (max_rxq == 0) { - RTE_ETHDEV_LOG(ERR, "No receive queue is available\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No receive queue is available"); return -EINVAL; } @@ -4606,8 +4606,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, shift = i % RTE_ETH_RETA_GROUP_SIZE; if ((reta_conf[idx].mask & RTE_BIT64(shift)) && (reta_conf[idx].reta[shift] >= max_rxq)) { - RTE_ETHDEV_LOG(ERR, - "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u", idx, shift, reta_conf[idx].reta[shift], max_rxq); return -EINVAL; @@ 
-4630,15 +4630,15 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA to NULL", port_id); return -EINVAL; } if (reta_size == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA with zero size", port_id); return -EINVAL; } @@ -4656,7 +4656,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } @@ -4682,8 +4682,8 @@ rte_eth_dev_rss_reta_query(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot query ethdev port %u RSS RETA from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot query ethdev port %u RSS RETA from NULL config", port_id); return -EINVAL; } @@ -4716,8 +4716,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS hash from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS hash from NULL config", port_id); return -EINVAL; } @@ -4729,8 +4729,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, rss_conf->rss_hf = rte_eth_rss_hf_refine(rss_conf->rss_hf); if ((dev_info.flow_type_rss_offloads | rss_conf->rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, rss_conf->rss_hf, dev_info.flow_type_rss_offloads); return -EINVAL; @@ -4738,14 +4738,14 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } if (rss_conf->rss_key != NULL && rss_conf->rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4753,9 +4753,9 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, if ((size_t)rss_conf->algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(rss_conf->algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, rss_conf->algorithm, dev_info.rss_algo_capa); return -EINVAL; } @@ -4782,8 +4782,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u RSS hash config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u RSS hash config to NULL", port_id); return -EINVAL; } @@ -4794,8 +4794,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, if (rss_conf->rss_key 
!= NULL && rss_conf->rss_key_len < dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4837,14 +4837,14 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4869,14 +4869,14 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4938,8 +4938,8 @@ rte_eth_fec_get_capability(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (speed_fec_capa == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FEC capability to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FEC capability to NULL when array size is non zero", port_id); return -EINVAL; } @@ -4963,8 +4963,8 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u current FEC mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u current FEC mode to NULL", port_id); return -EINVAL; } @@ -4988,7 +4988,7 @@ rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == 0) { - RTE_ETHDEV_LOG(ERR, "At least one FEC mode should be specified\n"); + RTE_ETHDEV_LOG_LINE(ERR, "At least one FEC mode should be specified"); return -EINVAL; } @@ -5040,8 +5040,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5050,12 +5050,12 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, return -ENOTSUP; if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } if (pool >= RTE_ETH_64_POOLS) { - RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1); + RTE_ETHDEV_LOG_LINE(ERR, "Pool ID must be 0-%d", RTE_ETH_64_POOLS - 1); return -EINVAL; } @@ -5063,7 +5063,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { index = eth_dev_get_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", 
port_id); return -ENOSPC; } @@ -5103,8 +5103,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot remove ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot remove ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5114,8 +5114,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) index = eth_dev_get_mac_addr_index(port_id, addr); if (index == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u: Cannot remove default MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: Cannot remove default MAC address", port_id); return -EADDRINUSE; } else if (index < 0) @@ -5146,8 +5146,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u default MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u default MAC address from NULL address", port_id); return -EINVAL; } @@ -5161,8 +5161,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) /* Keep address unique in dev->data->mac_addrs[]. */ index = eth_dev_get_mac_addr_index(port_id, addr); if (index > 0) { - RTE_ETHDEV_LOG(ERR, - "New default address for port %u was already in the address list. Please remove it first.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "New default address for port %u was already in the address list. Please remove it first.", port_id); return -EEXIST; } @@ -5220,14 +5220,14 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u unicast hash table from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u unicast hash table from NULL address", port_id); return -EINVAL; } if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } @@ -5239,15 +5239,15 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { if (!on) { - RTE_ETHDEV_LOG(ERR, - "Port %u: the MAC address was not set in UTA\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: the MAC address was not set in UTA", port_id); return -EINVAL; } index = eth_dev_get_hash_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", port_id); return -ENOSPC; } @@ -5309,15 +5309,15 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, link = dev->data->dev_link; if (queue_idx > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:port %u: invalid queue ID=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:port %u: invalid queue ID=%u", port_id, queue_idx); return -EINVAL; } if (tx_rate > link.link_speed) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d", tx_rate, link.link_speed); return -EINVAL; } @@ -5342,15 +5342,15 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id > dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set 
queue avail thresh: port %u: invalid queue ID=%u.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: invalid queue ID=%u.", port_id, queue_id); return -EINVAL; } if (avail_thresh > 99) { - RTE_ETHDEV_LOG(ERR, - "Set queue avail thresh: port %u: threshold should be <= 99.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: threshold should be <= 99.", port_id); return -EINVAL; } @@ -5415,14 +5415,14 @@ rte_eth_dev_callback_register(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot register ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot register ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5485,14 +5485,14 @@ rte_eth_dev_callback_unregister(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot unregister ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot unregister ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5551,13 +5551,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) dev = &rte_eth_devices[port_id]; if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -EPERM; } @@ -5568,8 +5568,8 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) rte_ethdev_trace_rx_intr_ctl(port_id, qid, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, qid, op, epfd, vec); } } @@ -5590,18 +5590,18 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id) dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -1; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -1; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -1; } @@ -5628,18 +5628,18 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector 
unset"); return -EPERM; } @@ -5649,8 +5649,8 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, rte_ethdev_trace_rx_intr_ctl_q(port_id, queue_id, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, queue_id, op, epfd, vec); return rc; } @@ -5949,28 +5949,28 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->rx_queues == NULL || dev->data->rx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Rx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_rx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -5997,28 +5997,28 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->tx_queues == NULL || dev->data->tx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Tx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_tx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -6068,13 +6068,13 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u burst mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6101,13 +6101,13 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Tx queue %u burst mode to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Tx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6134,13 +6134,13 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (pmc == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL", port_id, queue_id); return -EINVAL; } @@ -6224,8 +6224,8 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Rx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Rx timestamp to NULL", port_id); return -EINVAL; } @@ -6253,8 +6253,8 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Tx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Tx timestamp to NULL", port_id); return -EINVAL; } @@ -6299,8 +6299,8 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u timesync time to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u timesync time to NULL", port_id); return -EINVAL; } @@ -6325,8 +6325,8 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot write ethdev port %u timesync from NULL time\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot write ethdev port %u timesync from NULL time", port_id); return -EINVAL; } @@ -6351,7 +6351,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock) dev = &rte_eth_devices[port_id]; if (clock == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot read ethdev port %u clock to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot read ethdev port %u clock to NULL", port_id); return -EINVAL; } @@ -6375,8 +6375,8 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u register info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u register info to NULL", port_id); return -EINVAL; } @@ -6418,8 +6418,8 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM info to NULL", port_id); return -EINVAL; } @@ -6443,8 +6443,8 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u EEPROM from NULL info\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u EEPROM from NULL info", port_id); return -EINVAL; } @@ -6469,8 +6469,8 @@ rte_eth_dev_get_module_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (modinfo == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM module info to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM module info to NULL", port_id); return -EINVAL; } @@ -6495,22 +6495,22 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM info to NULL", port_id); return -EINVAL; } if (info->data == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM data to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM data to NULL", port_id); return -EINVAL; } if (info->length == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM to data with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM to data with zero size", port_id); return -EINVAL; } @@ -6535,8 +6535,8 @@ rte_eth_dev_get_dcb_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dcb_info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u DCB info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u DCB info to NULL", port_id); return -EINVAL; } @@ -6601,8 +6601,8 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (cap == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin capability to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin capability to NULL", port_id); return -EINVAL; } @@ -6627,8 +6627,8 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool) dev = &rte_eth_devices[port_id]; if (pool == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot test ethdev port %u mempool operation from NULL pool\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot test ethdev port %u mempool operation from NULL pool", port_id); return -EINVAL; } @@ -6672,14 +6672,14 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features) dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured != 0) { - RTE_ETHDEV_LOG(ERR, - "The port (ID=%"PRIu16") is already configured\n", + RTE_ETHDEV_LOG_LINE(ERR, + "The port (ID=%"PRIu16") is already configured", port_id); return -EBUSY; } if (features == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid features (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid features (NULL)"); return -EINVAL; } @@ -6708,14 +6708,14 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly capability\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly capability", port_id); return -EINVAL; } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly capability to NULL"); return -EINVAL; } @@ -6743,14 +6743,14 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly configuration", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly info to NULL"); return -EINVAL; } @@ -6776,22 +6776,22 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, dev = 
&rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot set IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot set IP reassembly configuration", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is started, cannot configure IP reassembly params.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is started, cannot configure IP reassembly params.", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Invalid IP reassembly configuration (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid IP reassembly configuration (NULL)"); return -EINVAL; } @@ -6814,7 +6814,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) dev = &rte_eth_devices[port_id]; if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6833,12 +6833,12 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6859,12 +6859,12 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6886,8 +6886,8 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -6940,7 +6940,7 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -6948,30 +6948,30 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before Tx affinity mapping\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before Tx affinity mapping", port_id); return -EINVAL; } if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } aggr_ports = rte_eth_dev_count_aggr_ports(port_id); if (aggr_ports == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u has no aggregated port\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u has no aggregated port", port_id); return -ENOTSUP; } if (affinity > aggr_ports) { - RTE_ETHDEV_LOG(ERR, - "Port %u map invalid affinity %u exceeds the maximum number %u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Port %u map invalid affinity %u exceeds the maximum number %u", port_id, affinity, aggr_ports); return -EINVAL; } diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 77331ce652..e89e474c39 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -176,9 +176,11 @@ extern "C" { #include "rte_dev_info.h" extern int rte_eth_dev_logtype; +#define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype -#define RTE_ETHDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) +#define RTE_ETHDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) struct rte_mbuf; @@ -2000,14 +2002,14 @@ struct rte_eth_fec_capa { /* Macros to check for valid port */ #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return retval; \ } \ } while (0) #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return; \ } \ } while (0) @@ -6052,8 +6054,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6067,7 +6069,7 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6123,8 +6125,8 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6196,8 +6198,8 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6267,8 +6269,8 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6391,8 +6393,8 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6406,7 +6408,7 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6501,8 +6503,8 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); rte_errno = ENODEV; return 0; @@ -6515,12 +6517,12 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (!rte_eth_dev_is_valid_port(port_id)) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx port_id=%u", port_id); rte_errno = ENODEV; return 0; } if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); rte_errno = EINVAL; return 0; @@ -6706,8 +6708,8 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid tx_port_id=%u or tx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid tx_port_id=%u or tx_queue_id=%u", tx_port_id, tx_queue_id); return 0; } @@ -6721,7 +6723,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0); if (qd1 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", tx_queue_id, tx_port_id); return 0; } @@ -6732,7 +6734,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u", rx_port_id, rx_queue_id); return 0; } @@ -6746,7 +6748,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0); if (qd2 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", rx_queue_id, rx_port_id); return 0; } diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c index a9c4637521..41e38bdc89 100644 --- a/lib/ethdev/rte_ethdev_cman.c +++ b/lib/ethdev/rte_ethdev_cman.c @@ -21,12 +21,12 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management info is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management info is NULL"); return -EINVAL; } if (dev->dev_ops->cman_info_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -49,12 +49,12 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_init == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -77,12 +77,12 @@ rte_eth_cman_config_set(uint16_t port_id, const struct 
rte_eth_cman_config *conf dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_set == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -104,12 +104,12 @@ rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } diff --git a/lib/ethdev/rte_ethdev_telemetry.c b/lib/ethdev/rte_ethdev_telemetry.c index b01028ce9b..6b873e7abe 100644 --- a/lib/ethdev/rte_ethdev_telemetry.c +++ b/lib/ethdev/rte_ethdev_telemetry.c @@ -36,8 +36,8 @@ eth_dev_parse_port_params(const char *params, uint16_t *port_id, pi = strtoul(params, end_param, 0); if (**end_param != '\0' && !has_next) - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (pi >= UINT16_MAX || !rte_eth_dev_is_valid_port(pi)) return -EINVAL; @@ -153,8 +153,8 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, kvlist = rte_kvargs_parse(end_param, valid_keys); ret = rte_kvargs_process(kvlist, NULL, eth_dev_parse_hide_zero, &hide_zero); if (kvlist == NULL || ret != 0) - RTE_ETHDEV_LOG(NOTICE, - "Unknown extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Unknown extra parameters passed to ethdev telemetry command, ignoring"); rte_kvargs_free(kvlist); } @@ -445,8 +445,8 @@ eth_dev_handle_port_flow_ctrl(const char *cmd __rte_unused, ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get flow ctrl info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get flow ctrl info, ret = %d", ret); return ret; } @@ -496,8 +496,8 @@ ethdev_parse_queue_params(const char *params, bool is_rx, qid = strtoul(qid_param, &end_param, 0); } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (qid >= UINT16_MAX) return -EINVAL; @@ -522,8 +522,8 @@ eth_dev_add_burst_mode(uint16_t port_id, uint16_t queue_id, return 0; if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get burst mode for port %u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get burst mode for port %u", port_id); return ret; } @@ -689,8 +689,8 @@ eth_dev_add_dcb_info(uint16_t port_id, struct rte_tel_data *d) ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get dcb info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get dcb info, ret = %d", ret); return ret; } @@ -769,8 +769,8 @@ eth_dev_handle_port_rss_info(const char *cmd __rte_unused, ret = rte_eth_dev_info_get(port_id, &dev_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device info, ret = %d", ret); return ret; } @@ 
-823,7 +823,7 @@ eth_dev_fec_capas_to_string(uint32_t fec_capa, char *fec_name, uint32_t len) count = snprintf(fec_name, len, "unknown "); if (count >= len) { - RTE_ETHDEV_LOG(WARNING, "FEC capa names may be truncated\n"); + RTE_ETHDEV_LOG_LINE(WARNING, "FEC capa names may be truncated"); count = len; } @@ -994,8 +994,8 @@ eth_dev_handle_port_vlan(const char *cmd __rte_unused, ret = rte_eth_dev_conf_get(port_id, &dev_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device configuration, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device configuration, ret = %d", ret); return ret; } @@ -1115,7 +1115,7 @@ eth_dev_handle_port_tm_caps(const char *cmd __rte_unused, ret = rte_tm_capabilities_get(port_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1229,8 +1229,8 @@ eth_dev_parse_tm_params(char *params, uint32_t *result) ret = strtoul(splited_param, &params, 0); if (*params != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (ret >= UINT32_MAX) return -EINVAL; @@ -1263,7 +1263,7 @@ eth_dev_handle_port_tm_level_caps(const char *cmd __rte_unused, ret = rte_tm_level_capabilities_get(port_id, level_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1389,7 +1389,7 @@ eth_dev_handle_port_tm_node_caps(const char *cmd __rte_unused, return 0; out: - RTE_ETHDEV_LOG(WARNING, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(WARNING, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 549e329558..f49d1d3767 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -18,6 +18,8 @@ #include "ethdev_trace.h" +#define FLOW_LOG RTE_ETHDEV_LOG_LINE + /* Mbuf dynamic field name for metadata.
*/ int32_t rte_flow_dynf_metadata_offs = -1; @@ -1614,13 +1616,13 @@ rte_flow_info_get(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (port_info == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (likely(!!ops->info_get)) { @@ -1651,23 +1653,23 @@ rte_flow_configure(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" already started.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" already started.", port_id); return -EINVAL; } if (port_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (queue_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.", port_id); return -EINVAL; } if ((port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) && @@ -1704,8 +1706,8 @@ rte_flow_pattern_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1713,8 +1715,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1722,8 +1724,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (pattern == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1791,8 +1793,8 @@ rte_flow_actions_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1800,8 +1802,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1809,8 +1811,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (actions == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1818,8 +1820,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (masks == NULL) { - 
RTE_FLOW_LOG(ERR, - "Port %"PRIu16" masks is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" masks is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1889,8 +1891,8 @@ rte_flow_template_table_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1898,8 +1900,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (table_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" table attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" table attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1907,8 +1909,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (pattern_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1916,8 +1918,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (actions_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index affdc8121b..78b6bbb159 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -46,9 +46,6 @@ extern "C" { #endif -#define RTE_FLOW_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) - /** * Flow rule attributes. 
* diff --git a/lib/ethdev/sff_telemetry.c b/lib/ethdev/sff_telemetry.c index f29e7fa882..b3f239d967 100644 --- a/lib/ethdev/sff_telemetry.c +++ b/lib/ethdev/sff_telemetry.c @@ -19,7 +19,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) int ret; if (d == NULL) { - RTE_ETHDEV_LOG(ERR, "Dict invalid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Dict invalid"); return; } @@ -27,16 +27,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module info, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module info, %d", ret); break; } return; @@ -46,7 +46,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) einfo.length = minfo.eeprom_len; einfo.data = calloc(1, minfo.eeprom_len); if (einfo.data == NULL) { - RTE_ETHDEV_LOG(ERR, "Allocation of port %u EEPROM data failed\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Allocation of port %u EEPROM data failed", port_id); return; } @@ -54,16 +54,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module EEPROM, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module EEPROM, %d", ret); break; } free(einfo.data); @@ -84,7 +84,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) sff_8636_show_all(einfo.data, einfo.length, d); break; default: - RTE_ETHDEV_LOG(NOTICE, "Unsupported module type: %u\n", minfo.type); + RTE_ETHDEV_LOG_LINE(NOTICE, "Unsupported module type: %u", minfo.type); break; } @@ -99,7 +99,7 @@ ssf_add_dict_string(struct rte_tel_data *d, const char *name_str, const char *va if (d->type != TEL_DICT) return; if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) { - RTE_ETHDEV_LOG(ERR, "data_len has exceeded the maximum number of inserts\n"); + RTE_ETHDEV_LOG_LINE(ERR, "data_len has exceeded the maximum number of inserts"); return; } @@ -135,13 +135,13 @@ eth_dev_handle_port_module_eeprom(const char *cmd __rte_unused, const char *para port_id = strtoul(params, &end_param, 0); if (errno != 0 || port_id >= UINT16_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid argument, %d\n", errno); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid argument, %d", errno); return -1; } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters [%s] passed to ethdev telemetry command, ignoring\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters [%s] passed to ethdev telemetry command, ignoring", end_param); rte_tel_data_start_dict(d); diff --git a/lib/member/member.h b/lib/member/member.h new file mode 100644 index 0000000000..a7b5b4a57c --- /dev/null +++ b/lib/member/member.h @@ -0,0 +1,14 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +extern int librte_member_logtype; +#define RTE_LOGTYPE_MEMBER librte_member_logtype + +#define MEMBER_LOG(level, ...) \ + RTE_LOG(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c index 8f859f7fbd..57eb7affab 100644 --- a/lib/member/rte_member.c +++ b/lib/member/rte_member.c @@ -11,6 +11,7 @@ #include <rte_tailq.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" #include "rte_member_vbf.h" @@ -102,8 +103,8 @@ rte_member_create(const struct rte_member_parameters *params) if (params->key_len == 0 || params->prim_hash_seed == params->sec_hash_seed) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Create setsummary with " - "invalid parameters\n"); + MEMBER_LOG(ERR, "Create setsummary with " + "invalid parameters"); return NULL; } @@ -112,7 +113,7 @@ rte_member_create(const struct rte_member_parameters *params) sketch_key_ring = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(params->top_k), params->socket_id, 0); if (sketch_key_ring == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed"); return NULL; } } @@ -135,7 +136,7 @@ rte_member_create(const struct rte_member_parameters *params) } te = rte_zmalloc("MEMBER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_MEMBER_LOG(ERR, "tailq entry allocation failed\n"); + MEMBER_LOG(ERR, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -144,7 +145,7 @@ rte_member_create(const struct rte_member_parameters *params) sizeof(struct rte_member_setsum), RTE_CACHE_LINE_SIZE, params->socket_id); if (setsum == NULL) { - RTE_MEMBER_LOG(ERR, "Create setsummary failed\n"); + MEMBER_LOG(ERR, "Create setsummary failed"); goto error_unlock_exit; } strlcpy(setsum->name, params->name, sizeof(setsum->name)); @@ -171,8 +172,8 @@ rte_member_create(const struct rte_member_parameters *params) if (ret < 0) goto error_unlock_exit; - RTE_MEMBER_LOG(DEBUG, "Creating a setsummary table with " - "mode %u\n", setsum->type); + MEMBER_LOG(DEBUG, "Creating a setsummary table with " + "mode %u", setsum->type); te->data = (void *)setsum; TAILQ_INSERT_TAIL(member_list, te, next); diff --git a/lib/member/rte_member.h b/lib/member/rte_member.h index b585904368..3278bbb5c1 100644 --- a/lib/member/rte_member.h +++ b/lib/member/rte_member.h @@ -100,15 +100,6 @@ typedef uint16_t member_set_t; #define MEMBER_HASH_FUNC rte_jhash #endif -extern int librte_member_logtype; - -#define RTE_MEMBER_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, \ - librte_member_logtype, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ - __func__, \ - RTE_FMT_TAIL(__VA_ARGS__,))) - /** @internal setsummary structure. 
*/ struct rte_member_setsum; diff --git a/lib/member/rte_member_heap.h b/lib/member/rte_member_heap.h index 9c4a01aebe..e0a3d54eab 100644 --- a/lib/member/rte_member_heap.h +++ b/lib/member/rte_member_heap.h @@ -6,6 +6,7 @@ #ifndef RTE_MEMBER_HEAP_H #define RTE_MEMBER_HEAP_H +#include "member.h" #include <rte_ring_elem.h> #include "rte_member.h" @@ -129,16 +130,16 @@ resize_hash_table(struct minheap *hp) while (1) { new_bkt_cnt = hp->hashtable->bkt_cnt * HASH_RESIZE_MULTI; - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]\n", + MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]", hp->hashtable->num_item / ((float)hp->hashtable->bkt_cnt * HASH_BKT_SIZE)); - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!"); rte_free(hp->hashtable); hp->hashtable = rte_zmalloc_socket(NULL, sizeof(struct hash) + new_bkt_cnt * sizeof(struct hash_bkt), RTE_CACHE_LINE_SIZE, hp->socket); if (hp->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); return -ENOMEM; } @@ -147,8 +148,8 @@ resize_hash_table(struct minheap *hp) for (i = 0; i < hp->size; ++i) { if (hash_table_insert(hp->elem[i].key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, - "Sketch Minheap HT resize insert fail!\n"); + MEMBER_LOG(ERR, + "Sketch Minheap HT resize insert fail!"); break; } } @@ -174,7 +175,7 @@ rte_member_minheap_init(struct minheap *heap, int size, heap->elem = rte_zmalloc_socket(NULL, sizeof(struct node) * size, RTE_CACHE_LINE_SIZE, socket); if (heap->elem == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed"); return -ENOMEM; } @@ -188,7 +189,7 @@ rte_member_minheap_init(struct minheap *heap, int size, RTE_CACHE_LINE_SIZE, socket); if (heap->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); rte_free(heap->elem); return -ENOMEM; } @@ -231,13 +232,13 @@ rte_member_heapify(struct minheap *hp, uint32_t idx, bool update_hash) if (update_hash) { if (hash_table_update(hp->elem[smallest].key, idx + 1, smallest + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } if (hash_table_update(hp->elem[idx].key, smallest + 1, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } } @@ -255,7 +256,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, uint32_t slot_id; if (rte_ring_sc_dequeue_elem(free_key_slot, &slot_id, sizeof(uint32_t)) != 0) { - RTE_MEMBER_LOG(ERR, "Minheap get empty keyslot failed\n"); + MEMBER_LOG(ERR, "Minheap get empty keyslot failed"); return -1; } @@ -270,7 +271,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -279,7 +280,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, if (hash_table_insert(key, i + 1, hp->key_len, hp->hashtable) < 0) { if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash 
Table resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table resize failed"); return -1; } } @@ -296,7 +297,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, uint32_t offset = RTE_PTR_DIFF(hp->elem[idx].key, key_slot) / hp->key_len; if (hash_table_del(key, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -311,7 +312,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, if (hash_table_update(hp->elem[idx].key, hp->size, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -332,7 +333,7 @@ rte_member_minheap_replace_node(struct minheap *hp, recycle_key = hp->elem[0].key; if (hash_table_del(recycle_key, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -340,7 +341,7 @@ rte_member_minheap_replace_node(struct minheap *hp, if (hash_table_update(hp->elem[0].key, hp->size, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -358,7 +359,7 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -367,9 +368,9 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = nd; if (hash_table_insert(new_key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed"); if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed"); return -1; } } diff --git a/lib/member/rte_member_ht.c b/lib/member/rte_member_ht.c index a85561b472..357097ff4b 100644 --- a/lib/member/rte_member_ht.c +++ b/lib/member/rte_member_ht.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_vect.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" @@ -84,8 +85,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, !rte_is_power_of_2(RTE_MEMBER_BUCKET_ENTRIES) || num_entries < RTE_MEMBER_BUCKET_ENTRIES) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership HT create with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership HT create with invalid parameters"); return -EINVAL; } @@ -98,8 +99,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, RTE_CACHE_LINE_SIZE, ss->socket_id); if (buckets == NULL) { - RTE_MEMBER_LOG(ERR, "memory allocation failed for HT " - "setsummary\n"); + MEMBER_LOG(ERR, "memory allocation failed for HT " + "setsummary"); return -ENOMEM; } @@ -121,8 +122,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, #endif ss->sig_cmp_fn = RTE_MEMBER_COMPARE_SCALAR; - RTE_MEMBER_LOG(DEBUG, "Hash table based filter created, " - "the table has %u entries, %u buckets\n", + MEMBER_LOG(DEBUG, "Hash table based filter created, " + "the table has %u entries, %u buckets", num_entries, num_buckets); return 0; } diff --git a/lib/member/rte_member_sketch.c 
b/lib/member/rte_member_sketch.c index d5f35aabe9..e006e835d9 100644 --- a/lib/member/rte_member_sketch.c +++ b/lib/member/rte_member_sketch.c @@ -14,6 +14,7 @@ #include <rte_prefetch.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_sketch.h" #include "rte_member_heap.h" @@ -118,8 +119,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (params->sample_rate == 0 || params->sample_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership Sketch created with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership Sketch created with invalid parameters"); return -EINVAL; } @@ -141,8 +142,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (ss->use_avx512 == true) { #ifdef CC_AVX512_SUPPORT ss->num_row = NUM_ROW_VEC; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch AVX512 update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch AVX512 update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_avx512; ss->sketch_lookup = sketch_lookup_avx512; ss->sketch_delete = sketch_delete_avx512; @@ -151,8 +152,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, #endif { ss->num_row = NUM_ROW_SCALAR; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch SCALAR update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch SCALAR update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_scalar; ss->sketch_lookup = sketch_lookup_scalar; ss->sketch_delete = sketch_delete_scalar; @@ -173,21 +174,21 @@ rte_member_create_sketch(struct rte_member_setsum *ss, sizeof(uint64_t) * num_col * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->table == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Table memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Table memory allocation failed"); return -ENOMEM; } ss->hash_seeds = rte_zmalloc_socket(NULL, sizeof(uint64_t) * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->hash_seeds == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed"); return -ENOMEM; } ss->runtime_var = rte_zmalloc_socket(NULL, sizeof(struct sketch_runtime), RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->runtime_var == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed"); rte_free(ss); return -ENOMEM; } @@ -205,7 +206,7 @@ rte_member_create_sketch(struct rte_member_setsum *ss, runtime->key_slots = rte_zmalloc_socket(NULL, ss->key_len * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->key_slots == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Key Slots allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Key Slots allocation failed"); goto error; } @@ -216,14 +217,14 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (rte_member_minheap_init(&(runtime->heap), params->top_k, ss->socket_id, params->prim_hash_seed) < 0) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap allocation failed"); goto error_runtime; } runtime->report_array = rte_zmalloc_socket(NULL, sizeof(struct node) * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->report_array == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed"); goto error_runtime; } @@ -239,8 +240,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, ss->converge_thresh = 10 * 
pow(ss->error_rate, -2.0) * sqrt(log(1 / delta)); } - RTE_MEMBER_LOG(DEBUG, "Sketch created, " - "the total memory required is %u Bytes\n", ss->num_col * ss->num_row * 8); + MEMBER_LOG(DEBUG, "Sketch created, " + "the total memory required is %u Bytes", ss->num_col * ss->num_row * 8); return 0; @@ -382,8 +383,8 @@ should_converge(const struct rte_member_setsum *ss) /* For count min sketch - L1 norm */ if (runtime_var->pkt_cnt > ss->converge_thresh) { runtime_var->converged = 1; - RTE_MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " - "from key count %"PRIu64"\n", + MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " + "from key count %"PRIu64, runtime_var->pkt_cnt); } } @@ -471,8 +472,8 @@ rte_member_add_sketch(const struct rte_member_setsum *ss, * the rte_member_add_sketch_byte_count routine should be used. */ if (ss->count_byte == 1) { - RTE_MEMBER_LOG(ERR, "Sketch is Byte Mode, " - "should use rte_member_add_byte_count()!\n"); + MEMBER_LOG(ERR, "Sketch is Byte Mode, " + "should use rte_member_add_byte_count()!"); return -EINVAL; } @@ -528,8 +529,8 @@ rte_member_add_sketch_byte_count(const struct rte_member_setsum *ss, /* should not call this API if not in count byte mode */ if (ss->count_byte == 0) { - RTE_MEMBER_LOG(ERR, "Sketch is Pkt Mode, " - "should use rte_member_add()!\n"); + MEMBER_LOG(ERR, "Sketch is Pkt Mode, " + "should use rte_member_add()!"); return -EINVAL; } diff --git a/lib/member/rte_member_vbf.c b/lib/member/rte_member_vbf.c index 5a0c51ecc0..5ad9487fad 100644 --- a/lib/member/rte_member_vbf.c +++ b/lib/member/rte_member_vbf.c @@ -9,6 +9,7 @@ #include <rte_errno.h> #include <rte_log.h> +#include "member.h" #include "rte_member.h" #include "rte_member_vbf.h" @@ -35,7 +36,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, params->false_positive_rate == 0 || params->false_positive_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership vBF create with invalid parameters\n"); + MEMBER_LOG(ERR, "Membership vBF create with invalid parameters"); return -EINVAL; } @@ -56,7 +57,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, if (fp_one_bf == 0) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership BF false positive rate is too small\n"); + MEMBER_LOG(ERR, "Membership BF false positive rate is too small"); return -EINVAL; } @@ -111,10 +112,10 @@ rte_member_create_vbf(struct rte_member_setsum *ss, ss->mul_shift = rte_ctz32(ss->num_set); ss->div_shift = rte_ctz32(32 >> ss->mul_shift); - RTE_MEMBER_LOG(DEBUG, "vector bloom filter created, " + MEMBER_LOG(DEBUG, "vector bloom filter created, " "each bloom filter expects %u keys, needs %u bits, %u hashes, " "with false positive rate set as %.5f, " - "The new calculated vBF false positive rate is %.5f\n", + "The new calculated vBF false positive rate is %.5f", num_keys_per_bf, ss->bits, ss->num_hashes, fp_one_bf, new_fp); ss->table = rte_zmalloc_socket(NULL, ss->num_set * (ss->bits >> 3), diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 5a1ec14d7a..70963e7ee7 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -16,10 +16,10 @@ #include "rte_pdump.h" RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); +#define RTE_LOGTYPE_PDUMP pdump_logtype -/* Macro for printing using RTE_LOG */ -#define PDUMP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, pdump_logtype, "%s(): " fmt, \ +#define PDUMP_LOG_LINE(level, fmt, args...) 
\ + RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ __func__, ## args) /* Used for the multi-process communication */ @@ -181,8 +181,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "rx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "rx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -195,8 +195,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_first_rx_callback(port, qid, pdump_rx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add rx callback, errno=%d", rte_errno); return rte_errno; } @@ -204,15 +204,15 @@ pdump_register_rx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing rx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing rx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_rx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove rx callback, errno=%d", -ret); return ret; } @@ -239,8 +239,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "tx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "tx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -253,8 +253,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add tx callback, errno=%d", rte_errno); return rte_errno; } @@ -262,15 +262,15 @@ pdump_register_tx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing tx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing tx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_tx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove tx callback, errno=%d", -ret); return ret; } @@ -295,22 +295,22 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) /* Check for possible DPDK version mismatch */ if (!(p->ver == V1 || p->ver == V2)) { - PDUMP_LOG(ERR, - "incorrect client version %u\n", p->ver); + PDUMP_LOG_LINE(ERR, + "incorrect client version %u", p->ver); return -EINVAL; } if (p->prm) { if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) { - PDUMP_LOG(ERR, - "invalid BPF program type: %u\n", + PDUMP_LOG_LINE(ERR, + "invalid BPF program type: %u", p->prm->prog_arg.type); return -EINVAL; } filter = rte_bpf_load(p->prm); if (filter == NULL) { - PDUMP_LOG(ERR, "cannot load BPF filter: %s\n", + PDUMP_LOG_LINE(ERR, "cannot load BPF filter: %s", rte_strerror(rte_errno)); return -rte_errno; } @@ -324,8 +324,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_get_port_by_name(p->device, &port); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to get port id for device id=%s\n", + PDUMP_LOG_LINE(ERR, + "failed to get port id for device id=%s", p->device); return -EINVAL; } @@ -336,8 +336,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + 
PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -345,19 +345,19 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) nb_rx_q = dev_info.nb_rx_queues; nb_tx_q = dev_info.nb_tx_queues; if (nb_rx_q == 0 && flags & RTE_PDUMP_FLAG_RX) { - PDUMP_LOG(ERR, - "number of rx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of rx queues cannot be 0"); return -EINVAL; } if (nb_tx_q == 0 && flags & RTE_PDUMP_FLAG_TX) { - PDUMP_LOG(ERR, - "number of tx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of tx queues cannot be 0"); return -EINVAL; } if ((nb_tx_q == 0 || nb_rx_q == 0) && flags == RTE_PDUMP_FLAG_RXTX) { - PDUMP_LOG(ERR, - "both tx&rx queues must be non zero\n"); + PDUMP_LOG_LINE(ERR, + "both tx&rx queues must be non zero"); return -EINVAL; } } @@ -394,7 +394,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) /* recv client requests */ if (mp_msg->len_param != sizeof(*cli_req)) { - PDUMP_LOG(ERR, "failed to recv from client\n"); + PDUMP_LOG_LINE(ERR, "failed to recv from client"); resp->err_value = -EINVAL; } else { cli_req = (const struct pdump_request *)mp_msg->param; @@ -407,7 +407,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) mp_resp.len_param = sizeof(*resp); mp_resp.num_fds = 0; if (rte_mp_reply(&mp_resp, peer) < 0) { - PDUMP_LOG(ERR, "failed to send to client:%s\n", + PDUMP_LOG_LINE(ERR, "failed to send to client:%s", strerror(rte_errno)); return -1; } @@ -424,7 +424,7 @@ rte_pdump_init(void) mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats), rte_socket_id(), 0); if (mz == NULL) { - PDUMP_LOG(ERR, "cannot allocate pdump statistics\n"); + PDUMP_LOG_LINE(ERR, "cannot allocate pdump statistics"); rte_errno = ENOMEM; return -1; } @@ -454,22 +454,22 @@ static int pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp) { if (ring == NULL || mp == NULL) { - PDUMP_LOG(ERR, "NULL ring or mempool\n"); + PDUMP_LOG_LINE(ERR, "NULL ring or mempool"); rte_errno = EINVAL; return -1; } if (mp->flags & RTE_MEMPOOL_F_SP_PUT || mp->flags & RTE_MEMPOOL_F_SC_GET) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "mempool with SP or SC set not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } if (rte_ring_is_prod_single(ring) || rte_ring_is_cons_single(ring)) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "ring with SP or SC set is not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } @@ -481,16 +481,16 @@ static int pdump_validate_flags(uint32_t flags) { if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) { - PDUMP_LOG(ERR, - "invalid flags, should be either rx/tx/rxtx\n"); + PDUMP_LOG_LINE(ERR, + "invalid flags, should be either rx/tx/rxtx"); rte_errno = EINVAL; return -1; } /* mask off the flags we know about */ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) { - PDUMP_LOG(ERR, - "unknown flags: %#x\n", flags); + PDUMP_LOG_LINE(ERR, + "unknown flags: %#x", flags); rte_errno = ENOTSUP; return -1; } @@ -504,14 +504,14 @@ pdump_validate_port(uint16_t port, char *name) int ret = 0; if (port >= RTE_MAX_ETHPORTS) { - PDUMP_LOG(ERR, "Invalid port id %u\n", port); + PDUMP_LOG_LINE(ERR, "Invalid port id %u", port); rte_errno = EINVAL; return -1; } ret = rte_eth_dev_get_name_by_port(port, name); if (ret < 0) { - PDUMP_LOG(ERR, "port %u to name mapping failed\n", + PDUMP_LOG_LINE(ERR, "port %u to name mapping failed", port); rte_errno = EINVAL; return -1; @@ 
-536,8 +536,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, struct pdump_response *resp; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - PDUMP_LOG(ERR, - "pdump enable/disable not allowed in primary process\n"); + PDUMP_LOG_LINE(ERR, + "pdump enable/disable not allowed in primary process"); return -EINVAL; } @@ -570,8 +570,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, } if (ret < 0) - PDUMP_LOG(ERR, - "client request for pdump enable/disable failed\n"); + PDUMP_LOG_LINE(ERR, + "client request for pdump enable/disable failed"); return ret; } @@ -738,8 +738,8 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) memset(stats, 0, sizeof(*stats)); ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -747,7 +747,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) if (pdump_stats == NULL) { if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* rte_pdump_init was not called */ - PDUMP_LOG(ERR, "pdump stats not initialized\n"); + PDUMP_LOG_LINE(ERR, "pdump stats not initialized"); rte_errno = EINVAL; return -1; } @@ -756,7 +756,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS); if (mz == NULL) { /* rte_pdump_init was not called in primary process?? */ - PDUMP_LOG(ERR, "can not find pdump stats\n"); + PDUMP_LOG_LINE(ERR, "can not find pdump stats"); rte_errno = EINVAL; return -1; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index dd143f2cc8..aecfdfa15d 100644 --- a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -72,7 +72,7 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -155,7 +155,7 @@ power_get_available_freqs(struct acpi_power_info *pi) /* Store the available frequencies into power context */ for (i = 0, pi->nb_freqs = 0; i < count; i++) { - POWER_DEBUG_TRACE("Lcore %u frequency[%d]: %s\n", pi->lcore_id, + POWER_DEBUG_LOG("Lcore %u frequency[%d]: %s", pi->lcore_id, i, freqs[i]); pi->freqs[pi->nb_freqs++] = strtoul(freqs[i], &p, POWER_CONVERT_TO_DECIMAL); @@ -164,17 +164,17 @@ power_get_available_freqs(struct acpi_power_info *pi) if ((pi->freqs[0]-1000) == pi->freqs[1]) { pi->turbo_available = 1; pi->turbo_enable = 1; - POWER_DEBUG_TRACE("Lcore %u Can do Turbo Boost\n", + POWER_DEBUG_LOG("Lcore %u Can do Turbo Boost", pi->lcore_id); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Turbo Boost not available on Lcore %u\n", + POWER_DEBUG_LOG("Turbo Boost not available on Lcore %u", pi->lcore_id); } ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", count, pi->lcore_id); out: if (f != NULL) diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index 44581fd48b..f8f43a49b2 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -79,7 +79,7 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) if (idx == 
pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -153,14 +153,14 @@ power_check_turbo(struct amd_pstate_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -277,7 +277,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bc57642cd1..b3d438c4de 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -182,8 +182,8 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Check if current governor is already what we want */ if (strcmp(buf, new_governor) == 0) { ret = 0; - POWER_DEBUG_TRACE("Power management governor of lcore %u is " - "already %s\n", lcore_id, new_governor); + POWER_DEBUG_LOG("Power management governor of lcore %u is " + "already %s", lcore_id, new_governor); goto out; } diff --git a/lib/power/power_common.h b/lib/power/power_common.h index c3fcbf4c10..ea2febbd86 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -14,10 +14,10 @@ extern int power_logtype; #define RTE_LOGTYPE_POWER power_logtype #ifdef RTE_LIBRTE_POWER_DEBUG -#define POWER_DEBUG_TRACE(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt, __func__, ## args) +#define POWER_DEBUG_LOG(fmt, args...) \ + RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) #else -#define POWER_DEBUG_TRACE(fmt, args...) +#define POWER_DEBUG_LOG(fmt, args...) #endif /* check if scaling driver matches one we want */ diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index 83e1e62830..31eb6942a2 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -82,7 +82,7 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 " @@ -172,14 +172,14 @@ power_check_turbo(struct cppc_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! 
highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -265,7 +265,7 @@ power_get_available_freqs(struct cppc_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 0ee8e603d2..2cc3045056 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -90,7 +90,7 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Uncore frequency '%u' to be set for pkg %02u die %02u\n", + POWER_DEBUG_LOG("Uncore frequency '%u' to be set for pkg %02u die %02u", target_uncore_freq, ui->pkg, ui->die); /* write the minimum value first if the target freq is less than current max */ @@ -235,7 +235,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of pkg %02u die %02u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of pkg %02u die %02u are available", num_uncore_freqs, ui->pkg, ui->die); out: diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 56aa302b5d..ca704e672c 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -104,7 +104,7 @@ power_read_turbo_pct(uint64_t *outVal) goto out; } - POWER_DEBUG_TRACE("power turbo pct: %"PRIu64"\n", *outVal); + POWER_DEBUG_LOG("power turbo pct: %"PRIu64, *outVal); out: close(fd); return ret; @@ -204,7 +204,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) max_non_turbo = base_min_ratio + (100 - max_non_turbo) * (base_max_ratio - base_min_ratio) / 100; - POWER_DEBUG_TRACE("no turbo perf %"PRIu64"\n", max_non_turbo); + POWER_DEBUG_LOG("no turbo perf %"PRIu64, max_non_turbo); pi->non_turbo_max_ratio = (uint32_t)max_non_turbo; @@ -310,7 +310,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_min); @@ -333,7 +333,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_max); @@ -434,7 +434,7 @@ power_get_available_freqs(struct pstate_power_info *pi) else base_max_freq = pi->non_turbo_max_ratio * BUS_FREQ; - POWER_DEBUG_TRACE("sys min %u, sys max %u, base_max %u\n", + POWER_DEBUG_LOG("sys min %u, sys max %u, base_max %u", sys_min_freq, sys_max_freq, base_max_freq); @@ -471,7 +471,7 @@ power_get_available_freqs(struct pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index d38a85eb0b..b2c4b49d97 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -73,16 +73,16 @@ regexdev_check_name(const char *name) size_t name_len; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Name can't be NULL\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Name can't be NULL"); return -EINVAL; } name_len = strnlen(name, RTE_REGEXDEV_NAME_MAX_LEN); if (name_len == 0) { - RTE_REGEXDEV_LOG(ERR, "Zero length RegEx device name\n"); + 
RTE_REGEXDEV_LOG_LINE(ERR, "Zero length RegEx device name"); return -EINVAL; } if (name_len >= RTE_REGEXDEV_NAME_MAX_LEN) { - RTE_REGEXDEV_LOG(ERR, "RegEx device name is too long\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device name is too long"); return -EINVAL; } return (int)name_len; @@ -101,17 +101,17 @@ rte_regexdev_register(const char *name) return NULL; dev = regexdev_allocated(name); if (dev != NULL) { - RTE_REGEXDEV_LOG(ERR, "RegEx device already allocated\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device already allocated"); return NULL; } dev_id = regexdev_find_free_dev(); if (dev_id == RTE_MAX_REGEXDEV_DEVS) { - RTE_REGEXDEV_LOG - (ERR, "Reached maximum number of RegEx devices\n"); + RTE_REGEXDEV_LOG_LINE + (ERR, "Reached maximum number of RegEx devices"); return NULL; } if (regexdev_shared_data_prepare() < 0) { - RTE_REGEXDEV_LOG(ERR, "Cannot allocate RegEx shared data\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Cannot allocate RegEx shared data"); return NULL; } @@ -215,8 +215,8 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) if (*dev->dev_ops->dev_configure == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -225,66 +225,66 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) return ret; if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_CROSS_BUFFER_SCAN_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_CROSS_BUFFER_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support cross buffer scan\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support cross buffer scan", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_AS_END_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_AS_END_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match as end\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match as end", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_ALL_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_ALL_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match all\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match all", dev_id); return -EINVAL; } if (cfg->nb_groups == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups must be > 0", dev_id); return -EINVAL; } if (cfg->nb_groups > dev_info.max_groups) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups %d > %d", dev_id, cfg->nb_groups, dev_info.max_groups); return -EINVAL; } if (cfg->nb_max_matches == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches must be > 0", dev_id); return -EINVAL; } if (cfg->nb_max_matches > dev_info.max_matches) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches %d > %d", dev_id, cfg->nb_max_matches, dev_info.max_matches); return -EINVAL; } if (cfg->nb_queue_pairs == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues must be > 0", dev_id); return -EINVAL; } if (cfg->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues %d > %d", dev_id, 
cfg->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } if (cfg->nb_rules_per_group == 0) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group must be > 0", dev_id); return -EINVAL; } if (cfg->nb_rules_per_group > dev_info.max_rules_per_group) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group %d > %d", dev_id, cfg->nb_rules_per_group, dev_info.max_rules_per_group); return -EINVAL; @@ -306,21 +306,21 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id, if (*dev->dev_ops->dev_qp_setup == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } if (queue_pair_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u invalid queue %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u invalid queue %d > %d", dev_id, queue_pair_id, dev->data->dev_conf.nb_queue_pairs); return -EINVAL; } if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -383,7 +383,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_get == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -401,7 +401,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_set == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -420,7 +420,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, if (*dev->dev_ops->dev_rule_db_update == NULL) return -ENOTSUP; if (rules == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -450,7 +450,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db, if (*dev->dev_ops->dev_db_import == NULL) return -ENOTSUP; if (rule_db == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -480,7 +480,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id, if (*dev->dev_ops->dev_xstats_names_get == NULL) return -ENOTSUP; if (xstats_map == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d xstats map can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d xstats map can't be NULL", dev_id); return -EINVAL; } @@ -498,11 +498,11 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_get == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d values can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d values can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_get)(dev, ids, values, n); @@ -519,15 +519,15 
@@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name, if (*dev->dev_ops->dev_xstats_by_name_get == NULL) return -ENOTSUP; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d name can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d name can't be NULL", dev_id); return -EINVAL; } if (id == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d id can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d id can't be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d value can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d value can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_by_name_get)(dev, name, id, value); @@ -544,7 +544,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_reset == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_reset)(dev, ids, nb_ids); @@ -572,7 +572,7 @@ rte_regexdev_dump(uint8_t dev_id, FILE *f) if (*dev->dev_ops->dev_dump == NULL) return -ENOTSUP; if (f == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d file can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d file can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_dump)(dev, f); diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index d50af775b5..a215d8768e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -206,21 +206,23 @@ extern "C" { #define RTE_REGEXDEV_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN extern int rte_regexdev_logtype; +#define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype -#define RTE_REGEXDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_regexdev_logtype, "" __VA_ARGS__) +#define RTE_REGEXDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return retval; \ } \ } while (0) #define RTE_REGEXDEV_VALID_DEV_ID_OR_RET(dev_id) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return; \ } \ } while (0) @@ -1475,7 +1477,7 @@ rte_regexdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->enqueue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif @@ -1535,7 +1537,7 @@ rte_regexdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->dequeue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 92982842a8..747eba2656 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -56,7 +56,10 @@ static const char *socket_dir; /* runtime directory */ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); -#define TMTY_LOG(l, ...) 
rte_log(RTE_LOG_ ## l, logtype, "TELEMETRY: " __VA_ARGS__) +#define RTE_LOGTYPE_TMTY logtype +#define TMTY_LOG_LINE(l, ...) \ + RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; @@ -417,7 +420,7 @@ socket_listener(void *socket) struct socket *s = (struct socket *)socket; int s_accepted = accept(s->sock, NULL, NULL); if (s_accepted < 0) { - TMTY_LOG(ERR, "Error with accept, telemetry thread quitting\n"); + TMTY_LOG_LINE(ERR, "Error with accept, telemetry thread quitting"); return NULL; } if (s->num_clients != NULL) { @@ -433,7 +436,7 @@ socket_listener(void *socket) rc = pthread_create(&th, NULL, s->fn, (void *)(uintptr_t)s_accepted); if (rc != 0) { - TMTY_LOG(ERR, "Error with create client thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create client thread: %s", strerror(rc)); close(s_accepted); if (s->num_clients != NULL) @@ -469,22 +472,22 @@ create_socket(char *path) { int sock = socket(AF_UNIX, SOCK_SEQPACKET, 0); if (sock < 0) { - TMTY_LOG(ERR, "Error with socket creation, %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error with socket creation, %s", strerror(errno)); return -1; } struct sockaddr_un sun = {.sun_family = AF_UNIX}; strlcpy(sun.sun_path, path, sizeof(sun.sun_path)); - TMTY_LOG(DEBUG, "Attempting socket bind to path '%s'\n", path); + TMTY_LOG_LINE(DEBUG, "Attempting socket bind to path '%s'", path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { struct stat st; - TMTY_LOG(DEBUG, "Initial bind to socket '%s' failed.\n", path); + TMTY_LOG_LINE(DEBUG, "Initial bind to socket '%s' failed.", path); /* first check if we have a runtime dir */ if (stat(socket_dir, &st) < 0 || !S_ISDIR(st.st_mode)) { - TMTY_LOG(ERR, "Cannot access DPDK runtime directory: %s\n", socket_dir); + TMTY_LOG_LINE(ERR, "Cannot access DPDK runtime directory: %s", socket_dir); close(sock); return -ENOENT; } @@ -496,22 +499,22 @@ create_socket(char *path) } /* socket is not active, delete and attempt rebind */ - TMTY_LOG(DEBUG, "Attempting unlink and retrying bind\n"); + TMTY_LOG_LINE(DEBUG, "Attempting unlink and retrying bind"); unlink(sun.sun_path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { - TMTY_LOG(ERR, "Error binding socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error binding socket: %s", strerror(errno)); close(sock); return -errno; /* if unlink failed, this will be -EADDRINUSE as above */ } } if (listen(sock, 1) < 0) { - TMTY_LOG(ERR, "Error calling listen for socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error calling listen for socket: %s", strerror(errno)); unlink(sun.sun_path); close(sock); return -errno; } - TMTY_LOG(DEBUG, "Socket creation and binding ok\n"); + TMTY_LOG_LINE(DEBUG, "Socket creation and binding ok"); return sock; } @@ -535,14 +538,14 @@ telemetry_legacy_init(void) int rc; if (num_legacy_callbacks == 1) { - TMTY_LOG(WARNING, "No legacy callbacks, legacy socket not created\n"); + TMTY_LOG_LINE(WARNING, "No legacy callbacks, legacy socket not created"); return -1; } v1_socket.fn = legacy_client_handler; if ((size_t) snprintf(v1_socket.path, sizeof(v1_socket.path), "%s/telemetry", socket_dir) >= sizeof(v1_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v1_socket.sock = create_socket(v1_socket.path); @@ -552,7 +555,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, 
NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create legacy socket thread: %s", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; @@ -562,7 +565,7 @@ telemetry_legacy_init(void) } pthread_setaffinity_np(t_old, sizeof(*thread_cpuset), thread_cpuset); set_thread_name(t_old, "dpdk-telemet-v1"); - TMTY_LOG(DEBUG, "Legacy telemetry socket initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Legacy telemetry socket initialized ok"); pthread_detach(t_old); return 0; } @@ -584,7 +587,7 @@ telemetry_v2_init(void) "Returns help text for a command. Parameters: string command"); v2_socket.fn = client_handler; if (strlcpy(spath, get_socket_path(socket_dir, 2), sizeof(spath)) >= sizeof(spath)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } memcpy(v2_socket.path, spath, sizeof(v2_socket.path)); @@ -599,14 +602,14 @@ telemetry_v2_init(void) /* add a suffix to the path if the basic version fails */ if (snprintf(v2_socket.path, sizeof(v2_socket.path), "%s:%d", spath, ++suffix) >= (int)sizeof(v2_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v2_socket.sock = create_socket(v2_socket.path); } rc = pthread_create(&t_new, NULL, socket_listener, &v2_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create socket thread: %s", strerror(rc)); close(v2_socket.sock); v2_socket.sock = -1; @@ -634,7 +637,7 @@ rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_ #ifndef RTE_EXEC_ENV_WINDOWS if (telemetry_v2_init() != 0) return -1; - TMTY_LOG(DEBUG, "Telemetry initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Telemetry initialized ok"); telemetry_legacy_init(); #endif /* RTE_EXEC_ENV_WINDOWS */ diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 10ab77262e..f2c275a7d7 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -150,16 +150,16 @@ vhost_user_iotlb_pending_insert(struct virtio_net *dev, uint64_t iova, uint8_t p node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for pending insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for pending insertion"); if (!TAILQ_EMPTY(&dev->iotlb_pending_list)) vhost_user_iotlb_pending_remove_all(dev); else vhost_user_iotlb_cache_random_evict(dev); node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, pending insertion failure\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, pending insertion failure"); return; } } @@ -253,16 +253,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for cache insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for cache insertion"); if (!TAILQ_EMPTY(&dev->iotlb_list)) vhost_user_iotlb_cache_random_evict(dev); else vhost_user_iotlb_pending_remove_all(dev); new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, cache insertion failed\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, cache insertion failed"); return; } } @@ -415,7 +415,7 @@ vhost_user_iotlb_init(struct virtio_net *dev) dev->iotlb_pool = rte_calloc_socket("iotlb", IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0, socket); if (!dev->iotlb_pool) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to create IOTLB cache pool\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to create IOTLB cache pool"); return -1; } for (i = 0; i < IOTLB_CACHE_SIZE; i++) diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index 5882e44176..a2fdac30a4 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -128,17 +128,17 @@ read_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int m ret = recvmsg(sockfd, &msgh, 0); if (ret <= 0) { if (ret) - VHOST_LOG_CONFIG(ifname, ERR, "recvmsg failed on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "recvmsg failed on fd %d (%s)", sockfd, strerror(errno)); return ret; } if (msgh.msg_flags & MSG_TRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated msg (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated msg (fd %d)", sockfd); /* MSG_CTRUNC may be caused by LSM misconfiguration */ if (msgh.msg_flags & MSG_CTRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated control data (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated control data (fd %d)", sockfd); for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL; cmsg = CMSG_NXTHDR(&msgh, cmsg)) { @@ -181,7 +181,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f msgh.msg_controllen = sizeof(control); cmsg = CMSG_FIRSTHDR(&msgh); if (cmsg == NULL) { - VHOST_LOG_CONFIG(ifname, ERR, "cmsg == NULL\n"); + VHOST_CONFIG_LOG(ifname, ERR, "cmsg == NULL"); errno = EINVAL; return -1; } @@ -199,7 +199,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f } while (ret < 0 && errno == EINTR); if (ret < 0) { - VHOST_LOG_CONFIG(ifname, ERR, "sendmsg error on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "sendmsg error on fd %d (%s)", sockfd, strerror(errno)); return ret; } @@ -252,13 +252,13 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) dev->async_copy = 1; } - VHOST_LOG_CONFIG(vsocket->path, INFO, "new device, handle is %d\n", vid); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new device, handle is %d", vid); if (vsocket->notify_ops->new_connection) { ret = vsocket->notify_ops->new_connection(vid); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add vhost user connection with fd %d\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add vhost user connection with fd %d", fd); goto err_cleanup; } @@ -270,8 +270,8 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_read_cb, NULL, conn); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add fd %d into vhost server fdset\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add fd %d into vhost server fdset", fd); if (vsocket->notify_ops->destroy_connection) @@ -304,7 +304,7 @@ vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused) if (fd < 0) return; - VHOST_LOG_CONFIG(vsocket->path, INFO, "new vhost user connection is %d\n", fd); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new vhost user connection is %d", fd); vhost_user_add_connection(fd, vsocket); } @@ -352,12 +352,12 @@ create_unix_socket(struct vhost_user_socket *vsocket) fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) return -1; - 
VHOST_LOG_CONFIG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d\n", + VHOST_CONFIG_LOG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d", vsocket->is_server ? "server" : "client", fd); if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)", fd, strerror(errno)); close(fd); return -1; @@ -391,11 +391,11 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) */ ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un)); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to bind: %s; remove it and try again\n", + VHOST_CONFIG_LOG(path, ERR, "failed to bind: %s; remove it and try again", strerror(errno)); goto err; } - VHOST_LOG_CONFIG(path, INFO, "binding succeeded\n"); + VHOST_CONFIG_LOG(path, INFO, "binding succeeded"); ret = listen(fd, MAX_VIRTIO_BACKLOG); if (ret < 0) @@ -404,7 +404,7 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection, NULL, vsocket); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to add listen fd %d to vhost server fdset\n", + VHOST_CONFIG_LOG(path, ERR, "failed to add listen fd %d to vhost server fdset", fd); goto err; } @@ -444,12 +444,12 @@ vhost_user_connect_nonblock(char *path, int fd, struct sockaddr *un, size_t sz) flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - VHOST_LOG_CONFIG(path, ERR, "can't get flags for connfd %d (%s)\n", + VHOST_CONFIG_LOG(path, ERR, "can't get flags for connfd %d (%s)", fd, strerror(errno)); return -2; } if ((flags & O_NONBLOCK) && fcntl(fd, F_SETFL, flags & ~O_NONBLOCK)) { - VHOST_LOG_CONFIG(path, ERR, "can't disable nonblocking on fd %d\n", fd); + VHOST_CONFIG_LOG(path, ERR, "can't disable nonblocking on fd %d", fd); return -2; } return 0; @@ -477,15 +477,15 @@ vhost_user_client_reconnect(void *arg __rte_unused) sizeof(reconn->un)); if (ret == -2) { close(reconn->fd); - VHOST_LOG_CONFIG(reconn->vsocket->path, ERR, - "reconnection for fd %d failed\n", + VHOST_CONFIG_LOG(reconn->vsocket->path, ERR, + "reconnection for fd %d failed", reconn->fd); goto remove_fd; } if (ret == -1) continue; - VHOST_LOG_CONFIG(reconn->vsocket->path, INFO, "connected\n"); + VHOST_CONFIG_LOG(reconn->vsocket->path, INFO, "connected"); vhost_user_add_connection(reconn->fd, reconn->vsocket); remove_fd: TAILQ_REMOVE(&reconn_list.head, reconn, next); @@ -506,7 +506,7 @@ vhost_user_reconnect_init(void) ret = pthread_mutex_init(&reconn_list.mutex, NULL); if (ret < 0) { - VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__); + VHOST_CONFIG_LOG("thread", ERR, "%s: failed to initialize mutex", __func__); return ret; } TAILQ_INIT(&reconn_list.head); @@ -514,10 +514,10 @@ vhost_user_reconnect_init(void) ret = rte_thread_create_internal_control(&reconn_tid, "vhost-reco", vhost_user_client_reconnect, NULL); if (ret != 0) { - VHOST_LOG_CONFIG("thread", ERR, "failed to create reconnect thread\n"); + VHOST_CONFIG_LOG("thread", ERR, "failed to create reconnect thread"); if (pthread_mutex_destroy(&reconn_list.mutex)) - VHOST_LOG_CONFIG("thread", ERR, - "%s: failed to destroy reconnect mutex\n", + VHOST_CONFIG_LOG("thread", ERR, + "%s: failed to destroy reconnect mutex", __func__); } @@ -539,17 +539,17 @@ vhost_user_start_client(struct vhost_user_socket *vsocket) return 0; } - VHOST_LOG_CONFIG(path, WARNING, 
"failed to connect: %s\n", strerror(errno)); + VHOST_CONFIG_LOG(path, WARNING, "failed to connect: %s", strerror(errno)); if (ret == -2 || !vsocket->reconnect) { close(fd); return -1; } - VHOST_LOG_CONFIG(path, INFO, "reconnecting...\n"); + VHOST_CONFIG_LOG(path, INFO, "reconnecting..."); reconn = malloc(sizeof(*reconn)); if (reconn == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to allocate memory for reconnect\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to allocate memory for reconnect"); close(fd); return -1; } @@ -638,7 +638,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -731,7 +731,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -743,7 +743,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) } if (vdpa_dev->ops->get_features(vdpa_dev, &vdpa_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa features for socket file.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa features for socket file."); ret = -1; goto unlock_exit; } @@ -781,7 +781,7 @@ rte_vhost_driver_get_protocol_features(const char *path, pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -794,7 +794,7 @@ rte_vhost_driver_get_protocol_features(const char *path, if (vdpa_dev->ops->get_protocol_features(vdpa_dev, &vdpa_protocol_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa protocol features.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa protocol features."); ret = -1; goto unlock_exit; } @@ -818,7 +818,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -830,7 +830,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) } if (vdpa_dev->ops->get_queue_num(vdpa_dev, &vdpa_queue_num) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa queue number.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa queue number."); ret = -1; goto unlock_exit; } @@ -848,10 +848,10 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs) struct vhost_user_socket *vsocket; int ret = 0; - VHOST_LOG_CONFIG(path, INFO, "Setting max queue pairs to %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "Setting max queue pairs to %u", max_queue_pairs); if (max_queue_pairs > VHOST_MAX_QUEUE_PAIRS) { - VHOST_LOG_CONFIG(path, ERR, "Library only supports up to %u queue pairs\n", + VHOST_CONFIG_LOG(path, ERR, "Library only supports up to %u queue pairs", VHOST_MAX_QUEUE_PAIRS); return -1; } @@ -859,7 +859,7 @@ rte_vhost_driver_set_max_queue_num(const char 
*path, uint32_t max_queue_pairs) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -898,7 +898,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) pthread_mutex_lock(&vhost_user.mutex); if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) { - VHOST_LOG_CONFIG(path, ERR, "the number of vhost sockets reaches maximum\n"); + VHOST_CONFIG_LOG(path, ERR, "the number of vhost sockets reaches maximum"); goto out; } @@ -908,14 +908,14 @@ rte_vhost_driver_register(const char *path, uint64_t flags) memset(vsocket, 0, sizeof(struct vhost_user_socket)); vsocket->path = strdup(path); if (vsocket->path == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to copy socket path string\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to copy socket path string"); vhost_user_socket_mem_free(vsocket); goto out; } TAILQ_INIT(&vsocket->conn_list); ret = pthread_mutex_init(&vsocket->conn_mutex, NULL); if (ret) { - VHOST_LOG_CONFIG(path, ERR, "failed to init connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to init connection mutex"); goto out_free; } @@ -936,7 +936,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy && (vsocket->iommu_support || (flags & RTE_VHOST_USER_POSTCOPY_SUPPORT))) { - VHOST_LOG_CONFIG(path, ERR, "async copy with IOMMU or post-copy not supported\n"); + VHOST_CONFIG_LOG(path, ERR, "async copy with IOMMU or post-copy not supported"); goto out_mutex; } @@ -965,7 +965,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy) { vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL); vsocket->features &= ~(1ULL << VHOST_F_LOG_ALL); - VHOST_LOG_CONFIG(path, INFO, "logging feature is disabled in async copy mode\n"); + VHOST_CONFIG_LOG(path, INFO, "logging feature is disabled in async copy mode"); } /* @@ -979,8 +979,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags) (1ULL << VIRTIO_NET_F_HOST_TSO6) | (1ULL << VIRTIO_NET_F_HOST_UFO); - VHOST_LOG_CONFIG(path, INFO, "Linear buffers requested without external buffers,\n"); - VHOST_LOG_CONFIG(path, INFO, "disabling host segmentation offloading support\n"); + VHOST_CONFIG_LOG(path, INFO, "Linear buffers requested without external buffers,"); + VHOST_CONFIG_LOG(path, INFO, "disabling host segmentation offloading support"); vsocket->supported_features &= ~seg_offload_features; vsocket->features &= ~seg_offload_features; } @@ -995,7 +995,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) ~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT); } else { #ifndef RTE_LIBRTE_VHOST_POSTCOPY - VHOST_LOG_CONFIG(path, ERR, "Postcopy requested but not compiled\n"); + VHOST_CONFIG_LOG(path, ERR, "Postcopy requested but not compiled"); ret = -1; goto out_mutex; #endif @@ -1023,7 +1023,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) out_mutex: if (pthread_mutex_destroy(&vsocket->conn_mutex)) { - VHOST_LOG_CONFIG(path, ERR, "failed to destroy connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to destroy connection mutex"); } out_free: vhost_user_socket_mem_free(vsocket); @@ -1113,7 +1113,7 @@ rte_vhost_driver_unregister(const char *path) goto again; } - VHOST_LOG_CONFIG(path, INFO, "free connfd %d\n", conn->connfd); + VHOST_CONFIG_LOG(path, INFO, "free connfd %d", conn->connfd); close(conn->connfd); vhost_destroy_device(conn->vid); 
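
All of these conversions lean on the same mechanism: the *_LOG_LINE and VHOST_*_LOG wrappers append the "\n" exactly once, inside the macro, either by concatenating it to a named fmt parameter or by splitting __VA_ARGS__ into format string and arguments with RTE_FMT_HEAD()/RTE_FMT_TAIL(), so call sites never carry a trailing newline themselves. A minimal standalone sketch of that head/tail trick follows, using plain printf() and hypothetical FMT_HEAD/FMT_TAIL/LOG_LINE names rather than the DPDK implementation:

/*
 * Sketch only: append exactly one "\n" per log call inside the macro.
 * FMT_HEAD/FMT_TAIL/LOG_LINE are illustrative names, not DPDK API.
 */
#include <stdio.h>

#define FMT_HEAD(fmt, ...) fmt
#define FMT_TAIL(fmt, ...) __VA_ARGS__

/*
 * LOG_LINE("Dev %u invalid queue %d > %d", d, q, n) expands to
 * printf("Dev %u invalid queue %d > %d" "\n" "%.0s", d, q, n, "");
 * the trailing comma in "__VA_ARGS__ ," guarantees the variadic part is
 * never empty, and the "%.0s"/"" pair keeps the expansion valid when the
 * format string is the only argument.
 */
#define LOG_LINE(...) \
	printf(FMT_HEAD(__VA_ARGS__ ,) "\n" "%.0s", FMT_TAIL(__VA_ARGS__ ,) "")

int main(void)
{
	LOG_LINE("Dev %u num of rules per group must be > 0", 0u);
	LOG_LINE("binding succeeded");
	return 0;
}

Since the "\n" is joined to the format string by compile-time literal concatenation, this costs nothing at run time; it only moves the newline out of every call site and into one place.
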
TAILQ_REMOVE(&vsocket->conn_list, conn, next); @@ -1192,14 +1192,14 @@ rte_vhost_driver_start(const char *path) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vhost_user.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vhost fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vhost fdset"); return -1; } int ret = rte_thread_create_internal_control(&fdset_tid, "vhost-evt", fdset_event_dispatch, &vhost_user.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create fdset handling thread"); fdset_pipe_uninit(&vhost_user.fdset); return -1; } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 219eef879c..9776fc07a9 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -84,8 +84,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, !ops->get_protocol_features || !ops->dev_conf || !ops->dev_close || !ops->set_vring_state || !ops->set_features) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Some mandatory vDPA ops aren't implemented\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Some mandatory vDPA ops aren't implemented"); return NULL; } @@ -107,8 +107,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, if (ops->get_dev_type) { ret = ops->get_dev_type(dev, &dev->type); if (ret) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Failed to get vdpa dev type.\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Failed to get vdpa dev type."); ret = -1; goto out_unlock; } diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c index 080b58f7de..c7ba5a61dd 100644 --- a/lib/vhost/vduse.c +++ b/lib/vhost/vduse.c @@ -78,32 +78,32 @@ vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unuse ret = ioctl(dev->vduse_dev_fd, VDUSE_IOTLB_GET_FD, &entry); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64, iova); return -1; } fd = ret; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "New IOTLB entry:\n"); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "New IOTLB entry:"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64, (uint64_t)entry.start, (uint64_t)entry.last); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\toffset: %" PRIx64 "\n", (uint64_t)entry.offset); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tfd: %d\n", fd); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tperm: %x\n", entry.perm); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\toffset: %" PRIx64, (uint64_t)entry.offset); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tfd: %d", fd); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tperm: %x", entry.perm); size = entry.last - entry.start + 1; mmap_addr = mmap(0, size + entry.offset, entry.perm, MAP_SHARED, fd, 0); if (!mmap_addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to mmap IOTLB entry for 0x%" PRIx64 "\n", iova); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to mmap IOTLB entry for 0x%" PRIx64, iova); ret = -1; goto close_fd; } ret = fstat(fd, &stat); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get page size.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get page size."); munmap(mmap_addr, entry.offset + size); goto close_fd; } @@ -134,14 +134,14 @@ vduse_control_queue_event(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &buf, sizeof(buf)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read control queue 
event: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read control queue event: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue kicked\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue kicked"); if (virtio_net_ctrl_handle(dev)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to handle ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to handle ctrl request"); } static void @@ -156,21 +156,21 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq_info.index = index; ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_GET_INFO, &vq_info); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get VQ %u info: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get VQ %u info: %s", index, strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "VQ %u info:\n", index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnum: %u\n", vq_info.num); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", (unsigned long long)vq_info.desc_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdriver_addr: %llx", (unsigned long long)vq_info.driver_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", (unsigned long long)vq_info.device_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tavail_idx: %u\n", vq_info.split.avail_index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tready: %u\n", vq_info.ready); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq_info.split.avail_index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); vq->last_avail_idx = vq_info.split.avail_index; vq->size = vq_info.num; @@ -182,12 +182,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (vq->kickfd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s", index, strerror(errno)); vq->kickfd = VIRTIO_INVALID_EVENTFD; return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tkick fd: %d\n", vq->kickfd); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tkick fd: %d", vq->kickfd); vq->shadow_used_split = rte_malloc_socket(NULL, vq->size * sizeof(struct vring_used_elem), @@ -198,12 +198,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vhost_user_iotlb_rd_lock(vq); if (vring_translate(dev, vq)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to translate vring %d addresses\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to translate vring %d addresses", index); if (vhost_enable_guest_notification(dev, vq, 0)) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to disable guest notifications on vring %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to disable guest notifications on vring %d", index); vhost_user_iotlb_rd_unlock(vq); @@ -212,7 +212,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD; @@ -222,8 +222,8 @@ 
vduse_vring_setup(struct virtio_net *dev, unsigned int index) if (vq == dev->cvq) { ret = fdset_add(&vduse.fdset, vq->kickfd, vduse_control_queue_event, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to setup kickfd handler for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to setup kickfd handler for VQ %u: %s", index, strerror(errno)); vq_efd.fd = VDUSE_EVENTFD_DEASSIGN; ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); @@ -232,7 +232,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) } fdset_pipe_notify(&vduse.fdset); vhost_enable_guest_notification(dev, vq, 1); - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl queue event handler installed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl queue event handler installed"); } } @@ -253,7 +253,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); @@ -279,23 +279,23 @@ vduse_device_start(struct virtio_net *dev) { unsigned int i, ret; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Starting device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Starting device..."); dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to get callback ops for driver"); return; } ret = ioctl(dev->vduse_dev_fd, VDUSE_DEV_GET_FEATURES, &dev->features); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get features: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get features: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64, dev->features); if (dev->features & @@ -331,7 +331,7 @@ vduse_device_stop(struct virtio_net *dev) { unsigned int i; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Stopping device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Stopping device..."); vhost_destroy_device_notify(dev); @@ -357,34 +357,34 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &req, sizeof(req)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read request: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read request: %s", strerror(errno)); return; } else if (ret < (int)sizeof(req)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Incomplete to read request %d\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Incomplete to read request %d", ret); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "New request: %s (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "New request: %s (%u)", vduse_req_id_to_str(req.type), req.type); switch (req.type) { case VDUSE_GET_VQ_STATE: vq = dev->virtqueue[req.vq_state.index]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u", req.vq_state.index, vq->last_avail_idx); resp.vq_state.split.avail_index = vq->last_avail_idx; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_SET_STATUS: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnew status: 0x%08x\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnew status: 0x%08x", req.s.status); old_status = dev->status; dev->status = 
req.s.status; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_UPDATE_IOTLB: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64, (uint64_t)req.iova.start, (uint64_t)req.iova.last); vhost_user_iotlb_cache_remove(dev, req.iova.start, req.iova.last - req.iova.start + 1); @@ -399,7 +399,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = write(dev->vduse_dev_fd, &resp, sizeof(resp)); if (ret != sizeof(resp)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to write response %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to write response %s", strerror(errno)); return; } @@ -411,7 +411,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) vduse_device_stop(dev); } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Request %s (%u) handled successfully\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Request %s (%u) handled successfully", vduse_req_id_to_str(req.type), req.type); } @@ -435,14 +435,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vduse.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vduse fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vduse fdset"); return -1; } ret = rte_thread_create_internal_control(&fdset_tid, "vduse-evt", fdset_event_dispatch, &vduse.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create vduse fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create vduse fdset handling thread"); fdset_pipe_uninit(&vduse.fdset); return -1; } @@ -452,13 +452,13 @@ vduse_device_create(const char *path, bool compliant_ol_flags) control_fd = open(VDUSE_CTRL_PATH, O_RDWR); if (control_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open %s: %s", VDUSE_CTRL_PATH, strerror(errno)); return -1; } if (ioctl(control_fd, VDUSE_SET_API_VERSION, &ver)) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set API version: %" PRIu64 ": %s", ver, strerror(errno)); ret = -1; goto out_ctrl_close; @@ -467,24 +467,24 @@ vduse_device_create(const char *path, bool compliant_ol_flags) dev_config = malloc(offsetof(struct vduse_dev_config, config) + sizeof(vnet_config)); if (!dev_config) { - VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to allocate VDUSE config"); ret = -1; goto out_ctrl_close; } ret = rte_vhost_driver_get_features(path, &features); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get backend features\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get backend features"); goto out_free; } ret = rte_vhost_driver_get_queue_num(path, &max_queue_pairs); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get max queue pairs\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get max queue pairs"); goto out_free; } - VHOST_LOG_CONFIG(path, INFO, "VDUSE max queue pairs: %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "VDUSE max queue pairs: %u", max_queue_pairs); total_queues = max_queue_pairs * 2; if (max_queue_pairs == 1) @@ -506,14 +506,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to create VDUSE device: 
%s", strerror(errno)); goto out_free; } dev_fd = open(path, O_RDWR); if (dev_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open device %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open device %s: %s", path, strerror(errno)); ret = -1; goto out_dev_close; @@ -521,14 +521,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fcntl(dev_fd, F_SETFL, O_NONBLOCK); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set chardev as non-blocking: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set chardev as non-blocking: %s", strerror(errno)); goto out_dev_close; } vid = vhost_new_device(&vduse_backend_ops); if (vid < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create new Vhost device\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to create new Vhost device"); ret = -1; goto out_dev_close; } @@ -549,7 +549,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = alloc_vring_queue(dev, i); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to alloc vring %d metadata\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to alloc vring %d metadata", i); goto out_dev_destroy; } @@ -558,7 +558,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP, &vq_cfg); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set-up VQ %d\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to set-up VQ %d", i); goto out_dev_destroy; } } @@ -567,7 +567,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fdset_add(&vduse.fdset, dev->vduse_dev_fd, vduse_events_handler, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to add fd %d to vduse fdset\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to add fd %d to vduse fdset", dev->vduse_dev_fd); goto out_dev_destroy; } @@ -624,7 +624,7 @@ vduse_device_destroy(const char *path) if (dev->vduse_ctrl_fd >= 0) { ret = ioctl(dev->vduse_ctrl_fd, VDUSE_DESTROY_DEV, name); if (ret) - VHOST_LOG_CONFIG(name, ERR, "Failed to destroy VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to destroy VDUSE device: %s", strerror(errno)); close(dev->vduse_ctrl_fd); dev->vduse_ctrl_fd = -1; diff --git a/lib/vhost/vduse.h b/lib/vhost/vduse.h index 4879b1f900..0d8f3f1205 100644 --- a/lib/vhost/vduse.h +++ b/lib/vhost/vduse.h @@ -21,14 +21,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) { RTE_SET_USED(compliant_ol_flags); - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } static inline int vduse_device_destroy(const char *path) { - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8a1f992d9d..5912a42979 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -100,8 +100,8 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, vhost_user_iotlb_pending_insert(dev, iova, perm); if (vhost_iotlb_miss(dev, iova, perm)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "IOTLB miss req failed for IOVA 0x%" PRIx64 "\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "IOTLB miss req failed for IOVA 0x%" PRIx64, iova); vhost_user_iotlb_pending_remove(dev, iova, 1, perm); } @@ -174,8 +174,8 @@ __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed 
to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -292,8 +292,8 @@ __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -473,9 +473,9 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, gpa = hva_to_gpa(dev, hva, exp_size); if (!gpa) { - VHOST_LOG_DATA(dev->ifname, ERR, + VHOST_DATA_LOG(dev->ifname, ERR, "failed to find GPA for log_addr: 0x%" - PRIx64 " hva: 0x%" PRIx64 "\n", + PRIx64 " hva: 0x%" PRIx64, log_addr, hva); return 0; } @@ -609,7 +609,7 @@ init_vring_queue(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq #ifdef RTE_LIBRTE_VHOST_NUMA if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); numa_node = SOCKET_ID_ANY; } @@ -640,8 +640,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0); if (vq == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vring %u.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vring %u.", i); return -1; } @@ -678,8 +678,8 @@ reset_device(struct virtio_net *dev) struct vhost_virtqueue *vq = dev->virtqueue[i]; if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to reset vring, virtqueue not allocated (%d)\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to reset vring, virtqueue not allocated (%d)", i); continue; } reset_vring_queue(dev, vq); @@ -697,17 +697,17 @@ vhost_new_device(struct vhost_backend_ops *ops) int i; if (ops == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing backend ops.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing backend ops."); return -1; } if (ops->iotlb_miss == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IOTLB miss backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IOTLB miss backend op."); return -1; } if (ops->inject_irq == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IRQ injection backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IRQ injection backend op."); return -1; } @@ -718,14 +718,14 @@ vhost_new_device(struct vhost_backend_ops *ops) } if (i == RTE_MAX_VHOST_DEVICE) { - VHOST_LOG_CONFIG("device", ERR, "failed to find a free slot for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to find a free slot for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } dev = rte_zmalloc(NULL, sizeof(struct virtio_net), 0); if (dev == NULL) { - VHOST_LOG_CONFIG("device", ERR, "failed to allocate memory for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to allocate memory for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } @@ -832,7 +832,7 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats dev->flags &= ~VIRTIO_DEV_SUPPORT_IOMMU; if (vhost_user_iotlb_init(dev) < 0) - VHOST_LOG_CONFIG("device", ERR, "failed to init IOTLB\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to init IOTLB"); } @@ 
-891,7 +891,7 @@ rte_vhost_get_numa_node(int vid) ret = get_mempolicy(&numa_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); return -1; } @@ -1608,8 +1608,8 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid) return 0; if (unlikely(qid >= dev->nr_vring || (qid & 1) == 0)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, qid); return 0; } @@ -1775,16 +1775,16 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) int node = vq->numa_node; if (unlikely(vq->async)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "async register failed: already registered (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "async register failed: already registered (qid: %d)", vq->index); return -1; } async = rte_zmalloc_socket(NULL, sizeof(struct vhost_async), 0, node); if (!async) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async metadata (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async metadata (qid: %d)", vq->index); return -1; } @@ -1792,8 +1792,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_info = rte_malloc_socket(NULL, vq->size * sizeof(struct async_inflight_info), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async_pkts_info (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async_pkts_info (qid: %d)", vq->index); goto out_free_async; } @@ -1801,8 +1801,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_cmpl_flag) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async pkts_cmpl_flag (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async pkts_cmpl_flag (qid: %d)", vq->index); goto out_free_async; } @@ -1812,8 +1812,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, node); if (!async->buffers_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async buffers (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async buffers (qid: %d)", vq->index); goto out_free_inflight; } @@ -1822,8 +1822,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem), RTE_CACHE_LINE_SIZE, node); if (!async->descs_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async descs (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async descs (qid: %d)", vq->index); goto out_free_inflight; } @@ -1914,8 +1914,8 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to unregister async channel, virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to unregister async channel, virtqueue busy."); return ret; } @@ -1927,9 +1927,9 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) if (!vq->async) { ret = 0; } else if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister 
async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); } else { vhost_free_async_mem(vq); ret = 0; @@ -1964,9 +1964,9 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) return 0; if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); return -1; } @@ -1985,17 +1985,17 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2005,8 +2005,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * info.max_vchans, RTE_CACHE_LINE_SIZE); if (vchans == NULL) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate vchans for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate vchans for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2015,7 +2015,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) } if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n", + VHOST_CONFIG_LOG("dma", INFO, "DMA %d vChannel %u already registered.", dma_id, vchan_id); pthread_mutex_unlock(&vhost_dma_lock); return 0; @@ -2027,8 +2027,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pkts_cmpl_flag_addr = rte_zmalloc(NULL, sizeof(bool *) * max_desc, RTE_CACHE_LINE_SIZE); if (!pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.", dma_id, vchan_id); if (dma_copy_track[dma_id].nr_vchans == 0) { @@ -2070,8 +2070,8 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to check in-flight packets. virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to check in-flight packets. 
virtqueue busy."); return ret; } @@ -2284,30 +2284,30 @@ rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid channel %d:%u.", dma_id, vchan_id); goto error; } if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, + "Failed to get stats for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } if (stats.submitted - stats.completed != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Do not unconfigure when there are inflight packets.\n"); + VHOST_CONFIG_LOG("dma", ERR, + "Do not unconfigure when there are inflight packets."); goto error; } diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5f24911190..5a74d0e628 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -673,17 +673,17 @@ vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, } extern int vhost_config_log_level; +#define RTE_LOGTYPE_VHOST_CONFIG vhost_config_log_level extern int vhost_data_log_level; +#define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level -#define VHOST_LOG_CONFIG(prefix, level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, vhost_config_log_level, \ - "VHOST_CONFIG: (%s) " fmt, prefix, ##args) +#define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ + RTE_LOG(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) -#define VHOST_LOG_DATA(prefix, level, fmt, args...) \ - (void)((RTE_LOG_ ## level <= RTE_LOG_DP_LEVEL) ? \ - rte_log(RTE_LOG_ ## level, vhost_data_log_level, \ - "VHOST_DATA: (%s) " fmt, prefix, ##args) : \ - 0) +#define VHOST_DATA_LOG(prefix, level, fmt, args...) 
\ + RTE_LOG_DP(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 @@ -702,7 +702,7 @@ extern int vhost_data_log_level; } \ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \ \ - VHOST_LOG_DATA(device->ifname, DEBUG, "%s", packet); \ + RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ } while (0) #else #define PRINT_PACKET(device, addr, size, header) do {} while (0) @@ -830,7 +830,7 @@ get_device(int vid) dev = vhost_devices[vid]; if (unlikely(!dev)) { - VHOST_LOG_CONFIG("device", ERR, "(%d) device not found.\n", vid); + VHOST_CONFIG_LOG("device", ERR, "(%d) device not found.", vid); } return dev; @@ -963,8 +963,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->signalled_used = new; vq->signalled_used_valid = true; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: used_event_idx=%d, old=%d, new=%d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: used_event_idx=%d, old=%d, new=%d", __func__, vhost_used_event(vq), old, new); if (vhost_need_event(vhost_used_event(vq), new, old) || diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 413f068bcd..bac10e6182 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -93,8 +93,8 @@ validate_msg_fds(struct virtio_net *dev, struct vhu_msg_context *ctx, int expect if (ctx->fd_num == expected_fds) return 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, - "expect %d FDs for request %s, received %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "expect %d FDs for request %s, received %d", expected_fds, vhost_message_handlers[ctx->msg.request.frontend].description, ctx->fd_num); @@ -144,7 +144,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) return; /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */ - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine map failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine map failed"); } } @@ -160,7 +160,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) if (rte_errno == EINVAL) return; - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine unmap failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine unmap failed"); } } } @@ -339,7 +339,7 @@ vhost_user_set_features(struct virtio_net **pdev, rte_vhost_driver_get_features(dev->ifname, &vhost_features); if (features & ~vhost_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid negotiated features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid negotiated features."); dev->flags |= VIRTIO_DEV_FEATURES_FAILED; dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; @@ -356,8 +356,8 @@ vhost_user_set_features(struct virtio_net **pdev, * is enabled when the live-migration starts. 
*/ if ((dev->features ^ features) & ~(1ULL << VHOST_F_LOG_ALL)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "features changed while device is running.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "features changed while device is running."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -374,11 +374,11 @@ vhost_user_set_features(struct virtio_net **pdev, } else { dev->vhost_hlen = sizeof(struct virtio_net_hdr); } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Virtio features: 0x%" PRIx64, dev->features); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "mergeable RX buffers %s, virtio 1 %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "mergeable RX buffers %s, virtio 1 %s", (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" : "off", (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" : "off"); @@ -426,8 +426,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, struct vhost_virtqueue *vq = dev->virtqueue[ctx->msg.payload.state.index]; if (ctx->msg.payload.state.num > 32768) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", ctx->msg.payload.state.num); return RTE_VHOST_MSG_RESULT_ERR; } @@ -445,8 +445,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, */ if (!vq_is_packed(dev)) { if (vq->size & (vq->size - 1)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", vq->size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -459,8 +459,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for shadow used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for shadow used ring."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -472,8 +472,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vq internal data.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vq internal data."); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -483,8 +483,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, vq->size * sizeof(struct batch_copy_elem), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->batch_copy_elems) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for batching copy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for batching copy."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -520,8 +520,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&node, NULL, 0, vq->desc, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "unable to get virtqueue %d numa information.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "unable to get virtqueue %d numa information.", vq->index); return; } @@ -531,15 +531,15 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq = rte_realloc_socket(*pvq, sizeof(**pvq), 0, node); if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc virtqueue %d on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc virtqueue %d on node %d", (*pvq)->index, node); return; } *pvq = vq; if (vq != dev->virtqueue[vq->index]) { - VHOST_LOG_CONFIG(dev->ifname, 
INFO, "reallocated virtqueue on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node); dev->virtqueue[vq->index] = vq; } @@ -549,8 +549,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sup = rte_realloc_socket(vq->shadow_used_packed, vq->size * sizeof(*sup), RTE_CACHE_LINE_SIZE, node); if (!sup) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow packed on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow packed on node %d", node); return; } @@ -561,8 +561,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sus = rte_realloc_socket(vq->shadow_used_split, vq->size * sizeof(*sus), RTE_CACHE_LINE_SIZE, node); if (!sus) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow split on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow split on node %d", node); return; } @@ -572,8 +572,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) bce = rte_realloc_socket(vq->batch_copy_elems, vq->size * sizeof(*bce), RTE_CACHE_LINE_SIZE, node); if (!bce) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc batch copy elem on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc batch copy elem on node %d", node); return; } @@ -584,8 +584,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) lc = rte_realloc_socket(vq->log_cache, sizeof(*lc) * VHOST_LOG_CACHE_NR, 0, node); if (!lc) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc log cache on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc log cache on node %d", node); return; } @@ -597,8 +597,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ri = rte_realloc_socket(vq->resubmit_inflight, sizeof(*ri), 0, node); if (!ri) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit inflight on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit inflight on node %d", node); return; } @@ -610,8 +610,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) rd = rte_realloc_socket(ri->resubmit_list, sizeof(*rd) * ri->resubmit_num, 0, node); if (!rd) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit list on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit list on node %d", node); return; } @@ -628,7 +628,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&dev_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "unable to get numa information.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "unable to get numa information."); return; } @@ -637,20 +637,20 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) dev = rte_realloc_socket(*pdev, sizeof(**pdev), 0, node); if (!dev) { - VHOST_LOG_CONFIG((*pdev)->ifname, ERR, "failed to realloc dev on node %d\n", node); + VHOST_CONFIG_LOG((*pdev)->ifname, ERR, "failed to realloc dev on node %d", node); return; } *pdev = dev; - VHOST_LOG_CONFIG(dev->ifname, INFO, "reallocated device on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated device on node %d", node); vhost_devices[dev->vid] = dev; mem_size = sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * dev->mem->nregions; mem = rte_realloc_socket(dev->mem, mem_size, 0, node); if (!mem) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc mem 
table on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc mem table on node %d", node); return; } @@ -659,8 +659,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) gp = rte_realloc_socket(dev->guest_pages, dev->max_guest_pages * sizeof(*gp), RTE_CACHE_LINE_SIZE, node); if (!gp) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc guest pages on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc guest pages on node %d", node); return; } @@ -771,8 +771,8 @@ mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64 size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - VHOST_LOG_CONFIG(dev->ifname, INFO, - "could not set coredump preference (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "could not set coredump preference (%s).", strerror(errno)); } #endif } @@ -791,7 +791,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->log_guest_addr = log_addr_to_gpa(dev, vq); if (vq->log_guest_addr == 0) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map log_guest_addr.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map log_guest_addr."); return; } } @@ -803,7 +803,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) if (vq->desc_packed == NULL || len != sizeof(struct vring_packed_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc_packed ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc_packed ring."); return; } @@ -819,8 +819,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.avail_user_addr, &len); if (vq->driver_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find driver area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find driver area address."); return; } @@ -832,8 +832,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.used_user_addr, &len); if (vq->device_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find device area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find device area address."); return; } @@ -851,7 +851,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->desc = (struct vring_desc *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.desc_user_addr, &len); if (vq->desc == 0 || len != sizeof(struct vring_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc ring."); return; } @@ -867,7 +867,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->avail = (struct vring_avail *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.avail_user_addr, &len); if (vq->avail == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map avail ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map avail ring."); return; } @@ -880,28 +880,28 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->used = (struct vring_used *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.used_user_addr, &len); if (vq->used == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to 
map used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map used ring."); return; } mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "last_used_idx (%u) and vq->used->idx (%u) mismatches;\n", + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "last_used_idx (%u) and vq->used->idx (%u) mismatches;", vq->last_used_idx, vq->used->idx); vq->last_used_idx = vq->used->idx; vq->last_avail_idx = vq->used->idx; - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "some packets maybe resent for Tx and dropped for Rx\n"); + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "some packets maybe resent for Tx and dropped for Rx"); } vq->access_ok = true; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address avail: %p\n", vq->avail); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address used: %p\n", vq->used); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64 "\n", vq->log_guest_addr); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address desc: %p", vq->desc); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address avail: %p", vq->avail); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address used: %p", vq->used); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64, vq->log_guest_addr); } /* @@ -975,8 +975,8 @@ vhost_user_set_vring_base(struct virtio_net **pdev, vq->last_avail_idx = ctx->msg.payload.state.num; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%u last_used_idx:%u last_avail_idx:%u.\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%u last_used_idx:%u last_avail_idx:%u.", ctx->msg.payload.state.index, vq->last_used_idx, vq->last_avail_idx); return RTE_VHOST_MSG_RESULT_OK; @@ -996,7 +996,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, dev->max_guest_pages * sizeof(*page), RTE_CACHE_LINE_SIZE); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "cannot realloc guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "cannot realloc guest_pages"); rte_free(old_pages); return -1; } @@ -1077,12 +1077,12 @@ dump_guest_pages(struct virtio_net *dev) for (i = 0; i < dev->nr_guest_pages; i++) { page = &dev->guest_pages[i]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "guest physical page region %u\n", i); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "guest physical page region %u", i); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64, page->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\thost_iova : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\thost_iova : %" PRIx64, page->host_iova); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tsize : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tsize : %" PRIx64, page->size); } } @@ -1131,9 +1131,9 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, ®_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to register ufd for region " - "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s\n", + "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s", (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1, @@ -1142,8 +1142,8 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, return -1; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t userfaultfd registered for range : 
%" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t userfaultfd registered for range : %" PRIx64 " - %" PRIx64, (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1); @@ -1190,8 +1190,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, * we've got to wait before we're allowed to generate faults. */ if (read_vhost_message(dev, main_fd, &ack_ctx) <= 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to read qemu ack on postcopy set-mem-table\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to read qemu ack on postcopy set-mem-table"); return -1; } @@ -1199,8 +1199,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, return -1; if (ack_ctx.msg.request.frontend != VHOST_USER_SET_MEM_TABLE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "bad qemu ack on postcopy set-mem-table (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "bad qemu ack on postcopy set-mem-table (%d)", ack_ctx.msg.request.frontend); return -1; } @@ -1227,8 +1227,8 @@ vhost_user_mmap_region(struct virtio_net *dev, /* Check for memory_size + mmap_offset overflow */ if (mmap_offset >= -region->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow", mmap_offset, region->size); return -1; } @@ -1243,7 +1243,7 @@ vhost_user_mmap_region(struct virtio_net *dev, */ alignment = get_blk_size(region->fd); if (alignment == (uint64_t)-1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "couldn't get hugepage size through fstat\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "couldn't get hugepage size through fstat"); return -1; } mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment); @@ -1256,8 +1256,8 @@ vhost_user_mmap_region(struct virtio_net *dev, * mmap() kernel implementation would return an error, but * better catch it before and provide useful info in the logs. 
*/ - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid", region->size + mmap_offset, alignment); return -1; } @@ -1267,7 +1267,7 @@ vhost_user_mmap_region(struct virtio_net *dev, MAP_SHARED | populate, region->fd, 0); if (mmap_addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap failed (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap failed (%s).", strerror(errno)); return -1; } @@ -1278,35 +1278,35 @@ vhost_user_mmap_region(struct virtio_net *dev, if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "adding guest pages to region failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "adding guest pages to region failed."); return -1; } } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "guest memory region size: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "guest memory region size: 0x%" PRIx64, region->size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest physical addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest physical addr: 0x%" PRIx64, region->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest virtual addr: 0x%" PRIx64, region->guest_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t host virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t host virtual addr: 0x%" PRIx64, region->host_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap addr : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap addr : 0x%" PRIx64, (uint64_t)(uintptr_t)mmap_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap size : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap size : 0x%" PRIx64, mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap align: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap align: 0x%" PRIx64, alignment); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap off : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap off : 0x%" PRIx64, mmap_offset); return 0; @@ -1329,14 +1329,14 @@ vhost_user_set_mem_table(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (memory->nregions > VHOST_MEMORY_MAX_NREGIONS) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "too many memory regions (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "too many memory regions (%u)", memory->nregions); goto close_msg_fds; } if (dev->mem && !vhost_memory_changed(memory, dev->mem)) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "memory regions not changed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "memory regions not changed"); close_msg_fds(ctx); @@ -1386,8 +1386,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, numa_node); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for dev->guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for dev->guest_pages"); goto close_msg_fds; } } @@ -1395,7 +1395,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node); if (dev->mem == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to allocate memory for dev->mem\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem"); goto free_guest_pages; } @@ -1416,7 +1416,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, mmap_offset = memory->regions[i].mmap_offset; if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap region %u\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region %u", i); goto free_mem_table; } @@ -1538,7 +1538,7 @@ virtio_is_ready(struct virtio_net *dev) dev->flags |= VIRTIO_DEV_READY; if (!(dev->flags & VIRTIO_DEV_RUNNING)) - VHOST_LOG_CONFIG(dev->ifname, INFO, "virtio is now ready for processing.\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "virtio is now ready for processing."); return 1; } @@ -1559,7 +1559,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f if (mfd == -1) { mfd = mkstemp(fname); if (mfd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to get inflight buffer fd\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to get inflight buffer fd"); return NULL; } @@ -1567,14 +1567,14 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } if (ftruncate(mfd, size) == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc inflight buffer"); close(mfd); return NULL; } ptr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0); if (ptr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap inflight buffer"); close(mfd); return NULL; } @@ -1616,8 +1616,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, void *addr; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid get_inflight_fd message size is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid get_inflight_fd message size is %d", ctx->msg.size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1633,7 +1633,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (!dev->inflight_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1642,11 +1642,11 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, num_queues = ctx->msg.payload.inflight.num_queues; queue_size = ctx->msg.payload.inflight.queue_size; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd num_queues: %u", ctx->msg.payload.inflight.num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd queue_size: %u", ctx->msg.payload.inflight.queue_size); if (vq_is_packed(dev)) @@ -1657,7 +1657,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, mmap_size = num_queues * pervq_inflight_size; addr = inflight_mem_alloc(dev, "vhost-inflight", mmap_size, &fd); if (!addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc vhost inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc vhost inflight area"); ctx->msg.payload.inflight.mmap_size = 0; return RTE_VHOST_MSG_RESULT_ERR; } @@ -1691,14 +1691,14 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, } } - 
VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_size: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_size: %"PRIu64, ctx->msg.payload.inflight.mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_offset: %"PRIu64, ctx->msg.payload.inflight.mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight fd: %d\n", ctx->fds[0]); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight fd: %d", ctx->fds[0]); return RTE_VHOST_MSG_RESULT_REPLY; } @@ -1722,8 +1722,8 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, fd = ctx->fds[0]; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight) || fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid set_inflight_fd message size is %d,fd is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid set_inflight_fd message size is %d,fd is %d", ctx->msg.size, fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1738,21 +1738,21 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, else pervq_inflight_size = get_pervq_shm_size_split(queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64"\n", mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64, mmap_size); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd mmap_offset: %"PRIu64, mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd num_queues: %u", num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd queue_size: %u", queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd fd: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd fd: %d", fd); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd pervq_inflight_size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd pervq_inflight_size: %d", pervq_inflight_size); /* @@ -1766,7 +1766,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (dev->inflight_info == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1780,7 +1780,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, addr = mmap(0, mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, mmap_offset); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap share memory.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap share memory."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1831,8 +1831,8 @@ vhost_user_set_vring_call(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring call idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring call idx:%d file:%d", file.index, file.fd); vq = dev->virtqueue[file.index]; @@ -1863,7 +1863,7 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev, if (!(ctx->msg.payload.u64 & VHOST_USER_VRING_NOFD_MASK)) close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented"); 
return RTE_VHOST_MSG_RESULT_OK; } @@ -1929,8 +1929,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (!resubmit) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1938,8 +1938,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (!resubmit->resubmit_list) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for inflight desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for inflight desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2025,8 +2025,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (resubmit == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2034,8 +2034,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (resubmit->resubmit_list == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2082,8 +2082,8 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring kick idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring kick idx:%d file:%d", file.index, file.fd); /* Interpret ring addresses only when ring is started. 
*/ @@ -2111,15 +2111,15 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, if (vq_is_packed(dev)) { if (vhost_check_queue_inflights_packed(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } } else { if (vhost_check_queue_inflights_split(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2159,8 +2159,8 @@ vhost_user_get_vring_base(struct virtio_net **pdev, ctx->msg.payload.state.num = vq->last_avail_idx; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%d file:%d", ctx->msg.payload.state.index, ctx->msg.payload.state.num); /* * Based on current qemu vhost-user implementation, this message is @@ -2217,8 +2217,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, bool enable = !!ctx->msg.payload.state.num; int index = (int)ctx->msg.payload.state.index; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set queue enable: %d to qp idx: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set queue enable: %d to qp idx: %d", enable, index); vq = dev->virtqueue[index]; @@ -2226,8 +2226,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, /* vhost_user_lock_all_queue_pairs locked all qps */ vq_assert_lock(dev, vq); if (enable && vq->async && vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to enable vring. Inflight packets must be completed first\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to enable vring. Inflight packets must be completed first"); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -2267,13 +2267,13 @@ vhost_user_set_protocol_features(struct virtio_net **pdev, rte_vhost_driver_get_protocol_features(dev->ifname, &backend_protocol_features); if (protocol_features & ~backend_protocol_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid protocol features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid protocol features."); return RTE_VHOST_MSG_RESULT_ERR; } dev->protocol_features = protocol_features; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Vhost-user protocol features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Vhost-user protocol features: 0x%" PRIx64, dev->protocol_features); return RTE_VHOST_MSG_RESULT_OK; @@ -2295,13 +2295,13 @@ vhost_user_set_log_base(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid log fd: %d\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid log fd: %d", fd); return RTE_VHOST_MSG_RESULT_ERR; } if (ctx->msg.size != sizeof(VhostUserLog)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid log base msg size: %"PRId32" != %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid log base msg size: %"PRId32" != %d", ctx->msg.size, (int)sizeof(VhostUserLog)); goto close_msg_fds; } @@ -2311,14 +2311,14 @@ vhost_user_set_log_base(struct virtio_net **pdev, /* Check for mmap size and offset overflow. 
*/ if (off >= -size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "log offset %#"PRIx64" and log size %#"PRIx64" overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "log offset %#"PRIx64" and log size %#"PRIx64" overflow", off, size); goto close_msg_fds; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "log mmap size: %"PRId64", offset: %"PRId64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "log mmap size: %"PRId64", offset: %"PRId64, size, off); /* @@ -2329,7 +2329,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, alignment = get_blk_size(fd); close(fd); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap log base failed!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2359,8 +2359,8 @@ vhost_user_set_log_base(struct virtio_net **pdev, * caching will be done, which will impact performance */ if (!vq->log_cache) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate VQ logging cache\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate VQ logging cache"); } /* @@ -2387,7 +2387,7 @@ static int vhost_user_set_log_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented."); return RTE_VHOST_MSG_RESULT_OK; } @@ -2409,8 +2409,8 @@ vhost_user_send_rarp(struct virtio_net **pdev, uint8_t *mac = (uint8_t *)&ctx->msg.payload.u64; struct rte_vdpa_device *vdpa_dev; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "MAC: " RTE_ETHER_ADDR_PRT_FMT "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "MAC: " RTE_ETHER_ADDR_PRT_FMT, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]); memcpy(dev->mac.addr_bytes, mac, 6); @@ -2438,8 +2438,8 @@ vhost_user_net_set_mtu(struct virtio_net **pdev, if (ctx->msg.payload.u64 < VIRTIO_MIN_MTU || ctx->msg.payload.u64 > VIRTIO_MAX_MTU) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid MTU size (%"PRIu64")\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid MTU size (%"PRIu64")", ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; @@ -2462,8 +2462,8 @@ vhost_user_set_req_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid file descriptor for backend channel (%d)\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid file descriptor for backend channel (%d)", fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2563,7 +2563,7 @@ vhost_user_get_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2573,10 +2573,10 @@ vhost_user_get_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size); if (ret != 0) { ctx->msg.size = 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() return error!"); } } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() not supported!"); } return RTE_VHOST_MSG_RESULT_REPLY; @@ -2595,14 +2595,14 @@ vhost_user_set_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (ctx->msg.payload.cfg.size > VHOST_USER_MAX_CONFIG_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost_user_config size: %"PRIu32", should not be larger than %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost_user_config size: %"PRIu32", should not be larger 
than %d", ctx->msg.payload.cfg.size, VHOST_USER_MAX_CONFIG_SIZE); goto out; } if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); goto out; } @@ -2613,9 +2613,9 @@ vhost_user_set_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size, ctx->msg.payload.cfg.flags); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() return error!"); } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() not supported!"); } return RTE_VHOST_MSG_RESULT_OK; @@ -2676,7 +2676,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, } break; default: - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid IOTLB message type (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid IOTLB message type (%d)", imsg->type); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2696,16 +2696,16 @@ vhost_user_set_postcopy_advise(struct virtio_net **pdev, dev->postcopy_ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); if (dev->postcopy_ufd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "userfaultfd not available: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "userfaultfd not available: %s", strerror(errno)); return RTE_VHOST_MSG_RESULT_ERR; } api_struct.api = UFFD_API; api_struct.features = 0; if (ioctl(dev->postcopy_ufd, UFFDIO_API, &api_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "UFFDIO_API ioctl failure: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "UFFDIO_API ioctl failure: %s", strerror(errno)); close(dev->postcopy_ufd); dev->postcopy_ufd = -1; @@ -2731,8 +2731,8 @@ vhost_user_set_postcopy_listen(struct virtio_net **pdev, struct virtio_net *dev = *pdev; if (dev->mem && dev->mem->nregions) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "regions already registered at postcopy-listen\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "regions already registered at postcopy-listen"); return RTE_VHOST_MSG_RESULT_ERR; } dev->postcopy_listening = 1; @@ -2783,8 +2783,8 @@ vhost_user_set_status(struct virtio_net **pdev, /* As per Virtio specification, the device status is 8bits long */ if (ctx->msg.payload.u64 > UINT8_MAX) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64, ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2793,8 +2793,8 @@ vhost_user_set_status(struct virtio_net **pdev, if ((dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK) && (dev->flags & VIRTIO_DEV_FEATURES_FAILED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "FEATURES_OK bit is set but feature negotiation failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "FEATURES_OK bit is set but feature negotiation failed"); /* * Clear the bit to let the driver know about the feature * negotiation failure @@ -2802,27 +2802,27 @@ vhost_user_set_status(struct virtio_net **pdev, dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "new device status(0x%08x):\n", dev->status); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "new device status(0x%08x):", dev->status); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-RESET: %u", (dev->status == VIRTIO_DEVICE_STATUS_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-ACKNOWLEDGE: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-ACKNOWLEDGE: %u", !!(dev->status & 
VIRTIO_DEVICE_STATUS_ACK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FEATURES_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FEATURES_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DEVICE_NEED_RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DEVICE_NEED_RESET: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DEV_NEED_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FAILED: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FAILED: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FAILED)); return RTE_VHOST_MSG_RESULT_OK; @@ -2881,14 +2881,14 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * goto out; if (ret != VHOST_USER_HDR_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Unexpected header size read\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Unexpected header size read"); ret = -1; goto out; } if (ctx->msg.size) { if (ctx->msg.size > sizeof(ctx->msg.payload)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid msg size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid msg size: %d", ctx->msg.size); ret = -1; goto out; @@ -2897,7 +2897,7 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * if (ret <= 0) goto out; if (ret != (int)ctx->msg.size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "read control message failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "read control message failed"); ret = -1; goto out; } @@ -2949,24 +2949,24 @@ send_vhost_backend_message_process_reply(struct virtio_net *dev, struct vhu_msg_ rte_spinlock_lock(&dev->backend_req_lock); ret = send_vhost_backend_message(dev, ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); goto out; } ret = read_vhost_message(dev, dev->backend_req_fd, &msg_reply); if (ret <= 0) { if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost read backend message reply failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost read backend message reply failed"); else - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); ret = -1; goto out; } if (msg_reply.msg.request.backend != ctx->msg.request.backend) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "received unexpected msg type (%u), expected %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "received unexpected msg type (%u), expected %u", msg_reply.msg.request.backend, ctx->msg.request.backend); ret = -1; goto out; @@ -3010,7 +3010,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, } if (vring_idx >= VHOST_MAX_VRING) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid vring index: %u\n", vring_idx); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid vring index: %u", vring_idx); return -1; } @@ -3078,8 +3078,8 @@ vhost_user_msg_handler(int vid, int fd) if (!dev->notify_ops) { dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to get callback ops for driver"); return -1; } } @@ -3087,7 
+3087,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.msg.request.frontend = VHOST_USER_NONE; ret = read_vhost_message(dev, fd, &ctx); if (ret == 0) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); return -1; } @@ -3098,7 +3098,7 @@ vhost_user_msg_handler(int vid, int fd) msg_handler = NULL; if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost read message %s%s%sfailed\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost read message %s%s%sfailed", msg_handler != NULL ? "for " : "", msg_handler != NULL ? msg_handler->description : "", msg_handler != NULL ? " " : ""); @@ -3107,20 +3107,20 @@ vhost_user_msg_handler(int vid, int fd) if (msg_handler != NULL && msg_handler->description != NULL) { if (request != VHOST_USER_IOTLB_MSG) - VHOST_LOG_CONFIG(dev->ifname, INFO, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "read message %s", msg_handler->description); else - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "read message %s", msg_handler->description); } else { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "external request %d\n", request); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "external request %d", request); } ret = vhost_user_check_and_alloc_queue_pair(dev, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc queue"); return -1; } @@ -3187,20 +3187,20 @@ vhost_user_msg_handler(int vid, int fd) switch (msg_result) { case RTE_VHOST_MSG_RESULT_ERR: - VHOST_LOG_CONFIG(dev->ifname, ERR, - "processing %s failed.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "processing %s failed.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_OK: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_REPLY: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded and needs reply.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded and needs reply.", msg_handler->description); send_vhost_reply(dev, fd, &ctx); handled = true; @@ -3229,8 +3229,8 @@ vhost_user_msg_handler(int vid, int fd) /* If message was not handled at this stage, treat it as an error */ if (!handled) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost message (req: %d) was not handled.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost message (req: %d) was not handled.", request); close_msg_fds(&ctx); msg_result = RTE_VHOST_MSG_RESULT_ERR; @@ -3247,7 +3247,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.fd_num = 0; send_vhost_reply(dev, fd, &ctx); } else if (msg_result == RTE_VHOST_MSG_RESULT_ERR) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost message handling failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost message handling failed."); ret = -1; goto unlock; } @@ -3296,7 +3296,7 @@ vhost_user_msg_handler(int vid, int fd) if (!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) { if (vdpa_dev->ops->dev_conf(dev->vid)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to configure vDPA device\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to configure vDPA device"); else dev->flags |= VIRTIO_DEV_VDPA_CONFIGURED; } @@ -3324,8 +3324,8 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) ret = send_vhost_message(dev, dev->backend_req_fd, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, 
ERR, - "failed to send IOTLB miss message (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to send IOTLB miss message (%d)", ret); return ret; } @@ -3358,7 +3358,7 @@ rte_vhost_backend_config_change(int vid, bool need_reply) } if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); return ret; } @@ -3390,7 +3390,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev, ret = send_vhost_backend_message_process_reply(dev, &ctx); if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to set host notifier (%d)", ret); return ret; } diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 8af20f1487..280d4845f8 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -130,8 +130,8 @@ vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, */ if (unlikely(copy_idx < 0)) { if (!vhost_async_dma_copy_log) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA copy failed for channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA copy failed for channel %d:%u", dma_id, vchan_id); vhost_async_dma_copy_log = true; } @@ -201,8 +201,8 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t */ nr_copies = rte_dma_completed(dma_id, vchan_id, max_pkts, &last_idx, &has_error); if (unlikely(!vhost_async_dma_complete_log && has_error)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA completion failure on channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA completion failure on channel %d:%u", dma_id, vchan_id); vhost_async_dma_complete_log = true; } else if (nr_copies == 0) { @@ -1062,7 +1062,7 @@ async_iter_initialize(struct virtio_net *dev, struct vhost_async *async) struct vhost_iov_iter *iter; if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); return -1; } @@ -1084,7 +1084,7 @@ async_iter_add_iovec(struct virtio_net *dev, struct vhost_async *async, static bool vhost_max_async_vec_log; if (!vhost_max_async_vec_log) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); vhost_max_async_vec_log = true; } @@ -1145,8 +1145,8 @@ async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, host_iova = (void *)(uintptr_t)gpa_to_first_hpa(dev, buf_iova + buf_offset, cpy_len, &mapped_len); if (unlikely(!host_iova)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: failed to get host iova.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: failed to get host iova.", __func__); return -1; } @@ -1243,7 +1243,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, } else hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr; - VHOST_LOG_DATA(dev->ifname, DEBUG, "RX: num merge buffers %d\n", num_buffers); + VHOST_DATA_LOG(dev->ifname, DEBUG, "RX: num merge buffers %d", num_buffers); if (unlikely(buf_len < dev->vhost_hlen)) { buf_offset = dev->vhost_hlen - buf_len; @@ -1428,14 +1428,14 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, 
DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, @@ -1645,12 +1645,12 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, if (unlikely(vhost_enqueue_single_packed(dev, vq, pkt, buf_vec, &nr_descs) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + nr_descs); vq_inc_last_avail_packed(vq, nr_descs); @@ -1702,7 +1702,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); rte_rwlock_read_lock(&vq->access_lock); if (unlikely(!vq->enabled)) @@ -1744,15 +1744,15 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -1821,14 +1821,14 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, num_buffers, true) < 0) { @@ -1853,8 +1853,8 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(pkt_err)) { uint16_t num_descs = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); /* update number of completed packets */ @@ -1967,12 +1967,12 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(vhost_enqueue_async_packed(dev, vq, pkt, buf_vec, nr_descs, nr_buffers) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + *nr_descs); return 0; @@ -2151,8 +2151,8 @@ 
virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx); } @@ -2344,18 +2344,18 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, if (unlikely(!dev)) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2363,15 +2363,15 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); goto out; } @@ -2399,15 +2399,15 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2417,16 +2417,16 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, vq_assert_lock(dev, vq); if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2455,15 +2455,15 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %u.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, 
"%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2471,20 +2471,20 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %u.", __func__, queue_id); goto out_access_unlock; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); goto out_access_unlock; } @@ -2511,12 +2511,12 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2565,15 +2565,15 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -2743,8 +2743,8 @@ vhost_dequeue_offload_legacy(struct virtio_net *dev, struct virtio_net_hdr *hdr, m->l4_len = sizeof(struct rte_udp_hdr); break; default: - VHOST_LOG_DATA(dev->ifname, WARNING, - "unsupported gso type %u.\n", + VHOST_DATA_LOG(dev->ifname, WARNING, + "unsupported gso type %u.", hdr->gso_type); goto error; } @@ -2975,8 +2975,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (mbuf_avail == 0) { cur = rte_pktmbuf_alloc(mbuf_pool); if (unlikely(cur == NULL)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to allocate memory for mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to allocate memory for mbuf."); goto error; } @@ -3041,7 +3041,7 @@ virtio_dev_extbuf_alloc(struct virtio_net *dev, struct rte_mbuf *pkt, uint32_t s virtio_dev_extbuf_free, buf); if (unlikely(shinfo == NULL)) { rte_free(buf); - VHOST_LOG_DATA(dev->ifname, ERR, "failed to init shinfo\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to init shinfo"); return -1; } @@ -3097,11 +3097,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + 
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) return 0; @@ -3138,8 +3138,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. */ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3152,7 +3152,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } dropped += 1; @@ -3421,8 +3421,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3433,7 +3433,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } return -1; @@ -3556,15 +3556,15 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -3609,7 +3609,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } @@ -3731,7 +3731,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) goto out; @@ -3768,8 +3768,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. 
*/ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed mbuf alloc of size %d from %s\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed mbuf alloc of size %d from %s", __func__, buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3783,8 +3783,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, legacy_ol_flags, slot_idx, true); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed to offload copies to async channel.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed to offload copies to async channel.", __func__); allocerr_warned = true; } @@ -3814,7 +3814,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: failed to transfer data.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: failed to transfer data.", __func__); pkt_idx = n_xfer; @@ -3914,7 +3914,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; @@ -3927,7 +3927,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(err)) { rte_pktmbuf_free(pkts); if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed to copy desc to mbuf on.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "Failed to copy desc to mbuf on."); allocerr_warned = true; } return -1; @@ -4019,7 +4019,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct async_inflight_info *pkts_info = async->pkts_info; struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; - VHOST_LOG_DATA(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers\n", dev->vid, count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers", dev->vid, count); async_iter_reset(async); @@ -4153,26 +4153,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, *nr_inflight = -1; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -4188,7 +4188,7 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.", __func__, queue_id); count = 0; goto out_access_unlock; @@ -4224,7 +4224,7 @@ 
rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c index c4847f84ed..8f78122361 100644 --- a/lib/vhost/virtio_net_ctrl.c +++ b/lib/vhost/virtio_net_ctrl.c @@ -36,13 +36,13 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, avail_idx = rte_atomic_load_explicit((unsigned short __rte_atomic *)&cvq->avail->idx, rte_memory_order_acquire); if (avail_idx == cvq->last_avail_idx) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue empty"); return 0; } desc_idx = cvq->avail->ring[cvq->last_avail_idx]; if (desc_idx >= cvq->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Out of range desc index, dropping"); goto err; } @@ -55,7 +55,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto err; } @@ -72,28 +72,28 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, if (descs[desc_idx].flags & VRING_DESC_F_WRITE) { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } if (desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Invalid ack size for ctrl req, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Invalid ack size for ctrl req, dropping"); goto err; } ctrl_elem->desc_ack = (uint8_t *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_WO); if (!ctrl_elem->desc_ack || desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to map ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to map ctrl ack descriptor"); goto err; } } else { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } @@ -114,18 +114,18 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, ctrl_elem->n_descs = n_descs; if (!ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Missing ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Missing ctrl ack descriptor"); goto err; } if (data_len < sizeof(ctrl_elem->ctrl_req->class) + sizeof(ctrl_elem->ctrl_req->command)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Invalid control header size\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Invalid control header size"); goto err; } ctrl_elem->ctrl_req = malloc(data_len); if (!ctrl_elem->ctrl_req) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to alloc ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to alloc ctrl request"); goto err; } @@ -138,7 +138,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, 
&desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto free_err; } @@ -153,7 +153,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, desc_addr = vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!desc_addr || desc_len < descs[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl descriptor"); goto free_err; } @@ -199,7 +199,7 @@ virtio_net_ctrl_handle_req(struct virtio_net *dev, struct virtio_net_ctrl *ctrl_ uint32_t i; queue_pairs = *(uint16_t *)(uintptr_t)ctrl_req->command_data; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs\n", queue_pairs); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs", queue_pairs); ret = VIRTIO_NET_OK; for (i = 0; i < dev->nr_vring; i++) { @@ -253,12 +253,12 @@ virtio_net_ctrl_handle(struct virtio_net *dev) int ret = 0; if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Packed ring not supported yet\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Packed ring not supported yet"); return -1; } if (!dev->cvq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "missing control queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "missing control queue"); return -1; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
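In the hunks above, each call site switches to the renamed VHOST_CONFIG_LOG()/VHOST_DATA_LOG() helpers and drops its trailing \n at the same time: the newline is appended once by the helper instead of by every caller. For reference, the lib/vhost/vhost.h hunk quoted later in this thread ends up defining those helpers on top of the per-line macros; a condensed view (the call site in the comment is only an illustration):

	/* From the lib/vhost/vhost.h hunk later in the thread: the helpers
	 * terminate the line themselves, so callers pass no "\n". */
	#define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \
		RTE_LOG_LINE(level, VHOST_CONFIG, \
			"VHOST_CONFIG: (%s) " fmt, prefix, ##args)

	#define VHOST_DATA_LOG(prefix, level, fmt, args...) \
		RTE_LOG_DP_LINE(level, VHOST_DATA, \
			"VHOST_DATA: (%s) " fmt, prefix, ##args)

	/* Typical converted call site, as in the hunks above:
	 * VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: virtqueue %u is busy.",
	 *		__func__, queue_id);
	 */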
* [PATCH v4 14/14] lib: use per line logging in helpers 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand ` (12 preceding siblings ...) 2023-12-18 14:38 ` [PATCH v4 13/14] lib: replace logging helpers David Marchand @ 2023-12-18 14:38 ` David Marchand 13 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-18 14:38 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng, Andrew Rybchenko, Nicolas Chautru, Konstantin Ananyev, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Jerin Jacob, Erik Gabriel Carrillo, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Yipeng Wang, Sameh Gobriel, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Ciara Power, Maxime Coquelin, Chenbo Xia Use RTE_LOG_LINE in existing macros that append a \n. Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since v3: - fixed some checkpatch complaints, Changes since RFC v1: - converted all logging helpers in lib/, --- lib/bbdev/rte_bbdev.c | 5 +++-- lib/bpf/bpf_impl.h | 2 +- lib/cfgfile/rte_cfgfile.c | 4 ++-- lib/compressdev/rte_compressdev_internal.h | 5 +++-- lib/cryptodev/rte_cryptodev.h | 22 ++++++++++------------ lib/dmadev/rte_dmadev.c | 6 ++++-- lib/ethdev/rte_ethdev.h | 3 +-- lib/eventdev/eventdev_pmd.h | 12 ++++++------ lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- lib/gpudev/gpudev.c | 6 ++++-- lib/graph/graph_private.h | 7 ++++--- lib/member/member.h | 4 ++-- lib/metrics/rte_metrics_telemetry.c | 4 ++-- lib/mldev/rte_mldev.h | 5 +++-- lib/net/rte_net_crc.c | 8 ++++---- lib/node/node_private.h | 8 +++++--- lib/pdump/rte_pdump.c | 5 ++--- lib/power/power_common.h | 2 +- lib/rawdev/rte_rawdev_pmd.h | 4 ++-- lib/rcu/rte_rcu_qsbr.h | 8 +++----- lib/regexdev/rte_regexdev.h | 3 +-- lib/stack/stack_pvt.h | 4 ++-- lib/telemetry/telemetry.c | 4 +--- lib/vhost/vhost.h | 8 ++++---- lib/vhost/vhost_crypto.c | 6 +++--- 25 files changed, 83 insertions(+), 79 deletions(-) diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index e09bb97abb..13bde3c25b 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -28,10 +28,11 @@ /* BBDev library logging ID */ RTE_LOG_REGISTER_DEFAULT(bbdev_logtype, NOTICE); +#define RTE_LOGTYPE_BBDEV bbdev_logtype /* Helper macro for logging */ -#define rte_bbdev_log(level, fmt, ...) \ - rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__) +#define rte_bbdev_log(level, ...) \ + RTE_LOG_LINE(level, BBDEV, "" __VA_ARGS__) #define rte_bbdev_log_debug(fmt, ...) \ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \ diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index 6a82ae4ef2..1a3d97d0c7 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -30,7 +30,7 @@ extern int rte_bpf_logtype; #define RTE_LOGTYPE_BPF rte_bpf_logtype #define RTE_BPF_LOG_LINE(lvl, fmt, args...) 
\ - RTE_LOG(lvl, BPF, fmt "\n", ##args) + RTE_LOG_LINE(lvl, BPF, fmt, ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index 2f9cc0722a..6a5e4fd942 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -29,10 +29,10 @@ struct rte_cfgfile { /* Setting up dynamic logging 8< */ RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO); +#define RTE_LOGTYPE_CFGFILE cfgfile_logtype #define CFG_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, cfgfile_logtype, "%s(): " fmt "\n", \ - __func__, ## args) + RTE_LOG_LINE(level, CFGFILE, "%s(): " fmt, __func__, ## args) /* >8 End of setting up dynamic logging */ /** when we resize a file structure, how many extra entries diff --git a/lib/compressdev/rte_compressdev_internal.h b/lib/compressdev/rte_compressdev_internal.h index b3b193e3ee..01b7764282 100644 --- a/lib/compressdev/rte_compressdev_internal.h +++ b/lib/compressdev/rte_compressdev_internal.h @@ -21,9 +21,10 @@ extern "C" { /* Logging Macros */ extern int compressdev_logtype; +#define RTE_LOGTYPE_COMPRESSDEV compressdev_logtype + #define COMPRESSDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, compressdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, COMPRESSDEV, "%s(): " fmt, __func__, ## args) /** * Dequeue processed packets from queue pair of a device. diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index 30ad2d9a95..359b6c2b29 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -36,24 +36,22 @@ extern int rte_cryptodev_logtype; /* Logging Macros */ #define CDEV_LOG_ERR(...) \ - RTE_LOG(ERR, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(ERR, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define CDEV_LOG_INFO(...) \ - RTE_LOG(INFO, CRYPTODEV, \ - RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(INFO, CRYPTODEV, "" __VA_ARGS__) #define CDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define CDEV_PMD_TRACE(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - dev, __func__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + dev, __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) /** * A macro that points to an offset from the start diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 009a21849a..5953a77bd6 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -32,9 +32,11 @@ static struct { } *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_LOGTYPE_DMA rte_dma_logtype + #define RTE_DMA_LOG(level, ...) 
\ - rte_log(RTE_LOG_ ## level, rte_dma_logtype, RTE_FMT("dma: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, DMA, RTE_FMT("dma: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) int rte_dma_dev_max(size_t dev_max) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index e89e474c39..21e3a21903 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -179,8 +179,7 @@ extern int rte_eth_dev_logtype; #define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype #define RTE_ETHDEV_LOG_LINE(level, ...) \ - RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) + RTE_LOG_LINE(level, ETHDEV, "" __VA_ARGS__) struct rte_mbuf; diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 2ec5aec0a8..1790587808 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -33,15 +33,15 @@ extern "C" { /* Logging Macros */ #define RTE_EDEV_LOG_ERR(...) \ - RTE_LOG(ERR, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(ERR, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define RTE_EDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #else #define RTE_EDEV_LOG_DEBUG(...) (void)0 #endif diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 3f22e85173..e6d3492056 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -30,27 +30,30 @@ #define DATA_MZ_NAME_FORMAT "rte_event_timer_adapter_data_%d" RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM evtim_logtype RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM_BUF evtim_buffer_logtype RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE); +#define RTE_LOGTYPE_EVTIM_SVC evtim_svc_logtype static struct rte_event_timer_adapter *adapters; static const struct event_timer_adapter_ops swtim_ops; #define EVTIM_LOG(level, logtype, ...) \ - rte_log(RTE_LOG_ ## level, logtype, \ - RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) \ - "\n", __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, logtype, \ + RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) -#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, evtim_logtype, __VA_ARGS__) +#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, EVTIM, __VA_ARGS__) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define EVTIM_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM, __VA_ARGS__) #define EVTIM_BUF_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_buffer_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_BUF, __VA_ARGS__) #define EVTIM_SVC_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_svc_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_SVC, __VA_ARGS__) #else #define EVTIM_LOG_DBG(...) (void)0 #define EVTIM_BUF_LOG_DBG(...) 
(void)0 diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c index 6845d18b4d..de8291151f 100644 --- a/lib/gpudev/gpudev.c +++ b/lib/gpudev/gpudev.c @@ -17,9 +17,11 @@ /* Logging */ RTE_LOG_REGISTER_DEFAULT(gpu_logtype, NOTICE); +#define RTE_LOGTYPE_GPUDEV gpu_logtype + #define GPU_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, gpu_logtype, RTE_FMT("gpu: " \ - RTE_FMT_HEAD(__VA_ARGS__, ) "\n", RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GPUDEV, RTE_FMT("gpu: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* Set any driver error as EPERM */ #define GPU_DRV_RET(function) \ diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d0ef13b205..f9274ce96c 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -18,11 +18,12 @@ #include "rte_graph_worker.h" extern int rte_graph_logtype; +#define RTE_LOGTYPE_GRAPH rte_graph_logtype #define GRAPH_LOG(level, ...) \ - rte_log(RTE_LOG_##level, rte_graph_logtype, \ - RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GRAPH, \ + RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__) #define graph_warn(...) GRAPH_LOG(WARNING, __VA_ARGS__) diff --git a/lib/member/member.h b/lib/member/member.h index a7b5b4a57c..cf600c4838 100644 --- a/lib/member/member.h +++ b/lib/member/member.h @@ -8,7 +8,7 @@ extern int librte_member_logtype; #define RTE_LOGTYPE_MEMBER librte_member_logtype #define MEMBER_LOG(level, ...) \ - RTE_LOG(level, MEMBER, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_LOG_LINE(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,), \ __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 1d133e1f8c..b8c9d75a7d 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -16,11 +16,11 @@ struct telemetry_metrics_data tel_met_data; int metrics_log_level; +#define RTE_LOGTYPE_METRICS metrics_log_level /* Logging Macros */ #define METRICS_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, metrics_log_level, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, METRICS, "%s(): "fmt, __func__, ## args) #define METRICS_LOG_ERR(fmt, args...) \ METRICS_LOG(ERR, fmt, ## args) diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 63b2670bb0..5cf6f0566f 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -144,9 +144,10 @@ extern "C" { /* Logging Macro */ extern int rte_ml_dev_logtype; +#define RTE_LOGTYPE_MLDEV rte_ml_dev_logtype -#define RTE_MLDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_##level, rte_ml_dev_logtype, "%s(): " fmt "\n", __func__, ##args) +#define RTE_MLDEV_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, MLDEV, "%s(): " fmt, __func__, ##args) #define RTE_ML_STR_MAX 128 /**< Maximum length of name string */ diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index 900d6de7f4..b401ea3dd8 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -70,11 +70,11 @@ static const rte_net_crc_handler handlers_neon[] = { static uint16_t max_simd_bitwidth; -#define NET_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, libnet_logtype, "%s(): " fmt "\n", \ - __func__, ## args) - RTE_LOG_REGISTER_DEFAULT(libnet_logtype, INFO); +#define RTE_LOGTYPE_NET libnet_logtype + +#define NET_LOG(level, fmt, args...) 
\ + RTE_LOG_LINE(level, NET, "%s(): " fmt, __func__, ## args) /* Scalar handling */ diff --git a/lib/node/node_private.h b/lib/node/node_private.h index 26135aaa5b..845fdaa12e 100644 --- a/lib/node/node_private.h +++ b/lib/node/node_private.h @@ -11,11 +11,13 @@ #include <rte_mbuf_dyn.h> extern int rte_node_logtype; +#define RTE_LOGTYPE_NODE rte_node_logtype + #define NODE_LOG(level, node_name, ...) \ - rte_log(RTE_LOG_##level, rte_node_logtype, \ - RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, NODE, \ + RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__ ,), \ node_name, __func__, __LINE__, \ - RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_FMT_TAIL(__VA_ARGS__ ,))) #define node_err(node_name, ...) NODE_LOG(ERR, node_name, __VA_ARGS__) #define node_info(node_name, ...) NODE_LOG(INFO, node_name, __VA_ARGS__) diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 70963e7ee7..f6160f9911 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -18,9 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); #define RTE_LOGTYPE_PDUMP pdump_logtype -#define PDUMP_LOG_LINE(level, fmt, args...) \ - RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ - __func__, ## args) +#define PDUMP_LOG_LINE(level, fmt, args...) \ + RTE_LOG_LINE(level, PDUMP, "%s(): " fmt, __func__, ## args) /* Used for the multi-process communication */ #define PDUMP_MP "mp_pdump" diff --git a/lib/power/power_common.h b/lib/power/power_common.h index ea2febbd86..4e32548169 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -15,7 +15,7 @@ extern int power_logtype; #ifdef RTE_LIBRTE_POWER_DEBUG #define POWER_DEBUG_LOG(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) + RTE_LOG_LINE(ERR, POWER, "%s: " fmt, __func__, ## args) #else #define POWER_DEBUG_LOG(fmt, args...) #endif diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h index 7b9ef1d09f..7173282c66 100644 --- a/lib/rawdev/rte_rawdev_pmd.h +++ b/lib/rawdev/rte_rawdev_pmd.h @@ -27,11 +27,11 @@ extern "C" { #include "rte_rawdev.h" extern int librawdev_logtype; +#define RTE_LOGTYPE_RAWDEV librawdev_logtype /* Logging Macros */ #define RTE_RDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, librawdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, RAWDEV, "%s(): " fmt, __func__, ##args) #define RTE_RDEV_ERR(fmt, args...) \ RTE_RDEV_LOG(ERR, fmt, ## args) diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 0dca8310c0..23c9f89805 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -40,17 +40,15 @@ extern int rte_rcu_log_type; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) #else #define __RTE_RCU_DP_LOG(level, fmt, args...) #endif #if defined(RTE_LIBRTE_RCU_DEBUG) -#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\ +#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do { \ if (v->qsbr_cnt[thread_id].lock_cnt) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args); \ + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args); \ } while (0) #else #define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) 
diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index a215d8768e..a50b841b1e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -209,8 +209,7 @@ extern int rte_regexdev_logtype; #define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype #define RTE_REGEXDEV_LOG_LINE(level, ...) \ - RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) + RTE_LOG_LINE(level, REGEXDEV, "" __VA_ARGS__) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ diff --git a/lib/stack/stack_pvt.h b/lib/stack/stack_pvt.h index c7eab4027d..2dce42a9da 100644 --- a/lib/stack/stack_pvt.h +++ b/lib/stack/stack_pvt.h @@ -8,10 +8,10 @@ #include <rte_log.h> extern int stack_logtype; +#define RTE_LOGTYPE_STACK stack_logtype #define STACK_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, STACK, "%s(): "fmt, __func__, ##args) #define STACK_LOG_ERR(fmt, args...) \ STACK_LOG(ERR, fmt, ## args) diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 747eba2656..31e2391867 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -57,9 +57,7 @@ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); #define RTE_LOGTYPE_TMTY logtype -#define TMTY_LOG_LINE(l, ...) \ - RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define TMTY_LOG_LINE(l, ...) RTE_LOG_LINE(l, TMTY, "TELEMETRY: " __VA_ARGS__) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5a74d0e628..25c0f86e55 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -678,12 +678,12 @@ extern int vhost_data_log_level; #define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level #define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ - RTE_LOG(level, VHOST_CONFIG, \ - "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_LINE(level, VHOST_CONFIG, \ + "VHOST_CONFIG: (%s) " fmt, prefix, ##args) #define VHOST_DATA_LOG(prefix, level, fmt, args...) \ - RTE_LOG_DP(level, VHOST_DATA, \ - "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_DP_LINE(level, VHOST_DATA, \ + "VHOST_DATA: (%s) " fmt, prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 6e5443e5f8..3704fbbb3d 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -21,15 +21,15 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO); #define RTE_LOGTYPE_VHOST_CRYPTO vhost_crypto_logtype #define VC_LOG_ERR(fmt, args...) \ - RTE_LOG(ERR, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(ERR, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #define VC_LOG_INFO(fmt, args...) \ - RTE_LOG(INFO, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(INFO, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VC_LOG_DBG(fmt, args...) \ - RTE_LOG(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #else #define VC_LOG_DBG(fmt, args...) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
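All conversions in this patch follow the same pattern: the library exposes its dynamic logtype under an RTE_LOGTYPE_<NAME> alias and its wrapper forwards to RTE_LOG_LINE, which appends the newline. A minimal sketch of that pattern for a hypothetical "foo" library (foo_logtype, FOO_LOG and foo_check are illustrative names, not part of the patch):

	#include <rte_log.h>

	/* Register a dynamic logtype and expose it under the RTE_LOGTYPE_
	 * alias that RTE_LOG_LINE() expects as its second argument. */
	RTE_LOG_REGISTER_DEFAULT(foo_logtype, INFO);
	#define RTE_LOGTYPE_FOO foo_logtype

	/* Per-line helper: callers pass no trailing \n, RTE_LOG_LINE adds it. */
	#define FOO_LOG(level, fmt, args...) \
		RTE_LOG_LINE(level, FOO, "%s(): " fmt, __func__, ##args)

	void
	foo_check(int id)
	{
		if (id < 0)
			FOO_LOG(ERR, "invalid id %d", id);
	}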
* [PATCH v5 00/13] Detect superfluous newline in logs 2023-11-17 13:18 [RFC 0/3] Detect superfluous newline in logs David Marchand ` (7 preceding siblings ...) 2023-12-18 14:37 ` [PATCH v4 00/14] Detect superfluous newline in logs David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:35 ` [PATCH v5 01/13] hash: remove some dead code David Marchand ` (14 more replies) 8 siblings, 15 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb Getting readable and consistent logs is important when running a DPDK application, especially when troubleshooting. A common issue with logs is when a DPDK change does not add a \n (or, on the contrary, adds too many) in the format string. This issue would only get noticed when actually hitting this log (which may be a situation hard to reach). This series proposes to introduce a new RTE_LOG_LINE helper that is responsible for logging a one-line message and spews a build error (with gcc) if any \n is part of the format string. Since the v1 discussion on the cover letter, I changed my mind, and made the choice to break existing logging helpers exported in the public API. The reasoning is that those should not be used in the first place: logs should be produced only by the library that registers the logtype. Some multiline logging for debugging and the test assert macros are still present, but in this case an explicit call to RTE_LOG() is done. This can be checked with a simple: $ git grep -E 'RTE_LOG(_DP)?\(' -- lib/ :^lib/log/ lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n" lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n" lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \ lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...)
RTE_LOG(lvl, IPFRAG, fmt, ##args) lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" lib/vhost/vhost.h: RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ Changes since v4: - fixed build with -pedantic, - reworked patch 12 so it introduce new logging helpers (like for rcu), - moved the transition to RTE_LOG_LINE previously in patch 12 to the last patch of this v5 series, Changes since v3: - fixed some checkpatch complaints, Changes since RFC v2: - sent as non RFC, - fixed format string crossing line boundaries, - avoided potential collision with BPF_ namespace, Changes since RFC v1: - rebased after Stephen log changes, - added more fixes as I was making progress on the topic, - added a check so dpdk developers stop using RTE_LOG(), - added preparation patches, like "lib: replace logging helpers", - converted all libraries, keeping some special cases with explicit calls to RTE_LOG, -- David Marchand David Marchand (13): hash: remove some dead code regexdev: fix logtype register lib: use dedicated logtypes and macros lib: add newline in logs lib: remove redundant newline from logs eal/linux: remove log paraphrasing the doc bpf: remove log level in internal helper lib: simplify multilines log messages lib: add more logging helpers vhost: improve log for memory dumping configuration log: add a per line log helper lib: replace logging helpers lib: use per line logging in helpers devtools/checkpatches.sh | 8 + drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/acl/acl_bld.c | 28 +- lib/acl/acl_gen.c | 8 +- lib/acl/acl_log.h | 2 + lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 4 +- lib/bbdev/rte_bbdev.c | 11 +- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 6 +- lib/bpf/bpf_validate.c | 44 +- lib/cfgfile/rte_cfgfile.c | 18 +- lib/compressdev/rte_compressdev_internal.h | 5 +- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 4 +- lib/cryptodev/rte_cryptodev.h | 22 +- lib/dispatcher/rte_dispatcher.c | 12 +- lib/dmadev/rte_dmadev.c | 8 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 6 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 8 +- lib/eal/common/eal_common_dev.c | 80 +- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 39 +- lib/eal/common/eal_common_lcore.c | 26 +- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +- lib/eal/common/eal_common_memzone.c | 24 +- lib/eal/common/eal_common_options.c | 236 +++--- lib/eal/common/eal_common_proc.c | 112 +-- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 3 +- lib/eal/common/eal_private.h | 4 + lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 +- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 +- lib/eal/common/malloc_mp.c | 72 +- lib/eal/common/rte_keepalive.c | 4 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 75 +- 
lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 10 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 11 +- lib/eal/freebsd/eal_memory.c | 34 +- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +-- lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 +- lib/eal/linux/eal_hugepage_info.c | 38 +- lib/eal/linux/eal_interrupts.c | 116 +-- lib/eal/linux/eal_lcore.c | 4 +- lib/eal/linux/eal_memalloc.c | 120 +-- lib/eal/linux/eal_memory.c | 208 ++--- lib/eal/linux/eal_thread.c | 6 +- lib/eal/linux/eal_timer.c | 14 +- lib/eal/linux/eal_vfio.c | 270 +++---- lib/eal/linux/eal_vfio_mp_sync.c | 5 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 3 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 36 +- lib/eal/windows/eal.c | 36 +- lib/eal/windows/eal_alarm.c | 13 +- lib/eal/windows/eal_debug.c | 10 +- lib/eal/windows/eal_dev.c | 10 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 7 +- lib/eal/windows/eal_memalloc.c | 50 +- lib/eal/windows/eal_memory.c | 22 +- lib/eal/windows/eal_windows.h | 6 +- lib/eal/windows/include/rte_windows.h | 6 +- lib/eal/windows/rte_thread.c | 29 +- lib/efd/rte_efd.c | 60 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 854 ++++++++++----------- lib/ethdev/rte_ethdev.h | 51 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/eventdev/eventdev_pmd.h | 18 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +- lib/eventdev/rte_event_dma_adapter.c | 18 +- lib/eventdev/rte_event_eth_rx_adapter.c | 40 +- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 21 +- lib/eventdev/rte_eventdev.c | 10 +- lib/fib/fib_log.h | 4 +- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/gpudev/gpudev.c | 6 +- lib/graph/graph_private.h | 7 +- lib/hash/rte_cuckoo_hash.c | 54 +- lib/hash/rte_cuckoo_hash.h | 11 - lib/hash/rte_fbk_hash.c | 6 +- lib/hash/rte_hash_crc.c | 14 +- lib/hash/rte_thash.c | 22 +- lib/hash/rte_thash_gfni.c | 10 +- lib/ip_frag/ip_frag_common.h | 3 + lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 43 +- lib/log/rte_log.h | 21 + lib/lpm/lpm_log.h | 2 + lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/mbuf_log.h | 2 + lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/mempool/rte_mempool.c | 24 +- lib/mempool/rte_mempool.h | 4 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/metrics/rte_metrics_telemetry.c | 6 +- lib/mldev/rte_mldev.c | 102 +-- lib/mldev/rte_mldev.h | 5 +- lib/net/rte_net_crc.c | 14 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/node/node_private.h | 8 +- lib/pdump/rte_pdump.c | 113 ++- lib/pipeline/rte_pipeline.c | 231 +++--- lib/port/port_log.h | 9 + 
lib/port/rte_port_ethdev.c | 20 +- lib/port/rte_port_eventdev.c | 20 +- lib/port/rte_port_fd.c | 26 +- lib/port/rte_port_frag.c | 16 +- lib/port/rte_port_ras.c | 14 +- lib/port/rte_port_ring.c | 20 +- lib/port/rte_port_sched.c | 14 +- lib/port/rte_port_source_sink.c | 50 +- lib/port/rte_port_sym_crypto.c | 20 +- lib/power/guest_channel.c | 38 +- lib/power/power_acpi_cpufreq.c | 116 +-- lib/power/power_amd_pstate_cpufreq.c | 132 ++-- lib/power/power_common.c | 14 +- lib/power/power_common.h | 10 +- lib/power/power_cppc_cpufreq.c | 130 ++-- lib/power/power_intel_uncore.c | 72 +- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 156 ++-- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +- lib/power/rte_power_uncore.c | 14 +- lib/rawdev/rte_rawdev_pmd.h | 4 +- lib/rcu/rte_rcu_qsbr.c | 66 +- lib/rcu/rte_rcu_qsbr.h | 17 +- lib/regexdev/rte_regexdev.c | 88 +-- lib/regexdev/rte_regexdev.h | 13 +- lib/reorder/rte_reorder.c | 34 +- lib/rib/rib_log.h | 6 +- lib/rib/rte_rib.c | 13 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 26 +- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 +++---- lib/sched/rte_sched_log.h | 2 + lib/stack/rte_stack.c | 8 +- lib/stack/stack_pvt.h | 4 +- lib/table/rte_table_acl.c | 74 +- lib/table/rte_table_array.c | 18 +- lib/table/rte_table_hash_cuckoo.c | 24 +- lib/table/rte_table_hash_ext.c | 24 +- lib/table/rte_table_hash_key16.c | 40 +- lib/table/rte_table_hash_key32.c | 40 +- lib/table/rte_table_hash_key8.c | 40 +- lib/table/rte_table_hash_lru.c | 24 +- lib/table/rte_table_lpm.c | 44 +- lib/table/rte_table_lpm_ipv6.c | 46 +- lib/table/rte_table_stub.c | 6 +- lib/table/table_log.h | 9 + lib/telemetry/telemetry.c | 39 +- lib/vhost/fd_man.c | 10 +- lib/vhost/iotlb.c | 36 +- lib/vhost/socket.c | 102 +-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 +-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 +-- lib/vhost/vhost.h | 24 +- lib/vhost/vhost_crypto.c | 12 +- lib/vhost/vhost_user.c | 530 ++++++------- lib/vhost/virtio_net.c | 188 ++--- lib/vhost/virtio_net_ctrl.c | 38 +- 218 files changed, 4120 insertions(+), 3973 deletions(-) create mode 100644 lib/member/member.h create mode 100644 lib/port/port_log.h create mode 100644 lib/table/table_log.h -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
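The cover letter relies on two properties of the new helper: RTE_LOG_LINE appends the newline itself, and gcc rejects a format string that already contains one. The lib/log/rte_log.h hunk is not quoted in this part of the thread, so the following is only a sketch of how such a macro can be built (hypothetical LOG_LINE* names; the actual macro in the series may differ in detail):

	#include <assert.h>
	#include <rte_common.h>
	#include <rte_log.h>

	/* gcc folds __builtin_strchr() on string literals, so a stray \n in a
	 * constant format string can be turned into a build error. */
	#if defined(RTE_TOOLCHAIN_GCC)
	#define LOG_LINE_CHECK_NO_NEWLINE(fmt) \
		static_assert(!__builtin_strchr(fmt, '\n'), \
			"This log format string contains a \\n")
	#else
	#define LOG_LINE_CHECK_NO_NEWLINE(fmt)
	#endif

	/* Log exactly one line: the \n is appended here, never by callers. */
	#define LOG_LINE(level, logtype, ...) do { \
		LOG_LINE_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
		RTE_LOG(level, logtype, \
			RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
				RTE_FMT_TAIL(__VA_ARGS__ ,))); \
	} while (0)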
* [PATCH v5 01/13] hash: remove some dead code 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-21 5:58 ` Ruifeng Wang 2023-12-21 6:26 ` Ruifeng Wang 2023-12-20 15:35 ` [PATCH v5 02/13] regexdev: fix logtype register David Marchand ` (13 subsequent siblings) 14 siblings, 2 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ruifeng Wang, Ray Kinsella, Dharmik Thakkar This macro is not used. Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/hash/rte_cuckoo_hash.h | 11 ----------- 1 file changed, 11 deletions(-) diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h index f7afc4dd79..8ea793c66e 100644 --- a/lib/hash/rte_cuckoo_hash.h +++ b/lib/hash/rte_cuckoo_hash.h @@ -29,17 +29,6 @@ #define RETURN_IF_TRUE(cond, retval) #endif -#if defined(RTE_LIBRTE_HASH_DEBUG) -#define ERR_IF_TRUE(cond, fmt, args...) do { \ - if (cond) { \ - RTE_LOG(ERR, HASH, fmt, ##args); \ - return; \ - } \ -} while (0) -#else -#define ERR_IF_TRUE(cond, fmt, args...) -#endif - #include <rte_hash_crc.h> #include <rte_jhash.h> -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v5 01/13] hash: remove some dead code 2023-12-20 15:35 ` [PATCH v5 01/13] hash: remove some dead code David Marchand @ 2023-12-21 5:58 ` Ruifeng Wang 2023-12-21 6:26 ` Ruifeng Wang 1 sibling, 0 replies; 122+ messages in thread From: Ruifeng Wang @ 2023-12-21 5:58 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ray Kinsella, Dharmik Thakkar On 2023/12/20 11:35 PM, David Marchand wrote: > This macro is not used. > > Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > --- > lib/hash/rte_cuckoo_hash.h | 11 ----------- > 1 file changed, 11 deletions(-) > > diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h > index f7afc4dd79..8ea793c66e 100644 > --- a/lib/hash/rte_cuckoo_hash.h > +++ b/lib/hash/rte_cuckoo_hash.h > @@ -29,17 +29,6 @@ > #define RETURN_IF_TRUE(cond, retval) > #endif > > -#if defined(RTE_LIBRTE_HASH_DEBUG) > -#define ERR_IF_TRUE(cond, fmt, args...) do { \ > - if (cond) { \ > - RTE_LOG(ERR, HASH, fmt, ##args); \ > - return; \ > - } \ > -} while (0) > -#else > -#define ERR_IF_TRUE(cond, fmt, args...) > -#endif > - > #include <rte_hash_crc.h> > #include <rte_jhash.h> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com> IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you. ^ permalink raw reply [flat|nested] 122+ messages in thread
* Re: [PATCH v5 01/13] hash: remove some dead code 2023-12-20 15:35 ` [PATCH v5 01/13] hash: remove some dead code David Marchand 2023-12-21 5:58 ` Ruifeng Wang @ 2023-12-21 6:26 ` Ruifeng Wang 1 sibling, 0 replies; 122+ messages in thread From: Ruifeng Wang @ 2023-12-21 6:26 UTC (permalink / raw) To: David Marchand, dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Yipeng Wang, Sameh Gobriel, Vladimir Medvedkin, Ray Kinsella, Dharmik Thakkar, nd On 2023/12/20 11:35 PM, David Marchand wrote: > This macro is not used. > > Fixes: 769b2de7fb52 ("hash: implement RCU resources reclamation") > Cc: stable@dpdk.org > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> > Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> > --- > lib/hash/rte_cuckoo_hash.h | 11 ----------- > 1 file changed, 11 deletions(-) > > diff --git a/lib/hash/rte_cuckoo_hash.h b/lib/hash/rte_cuckoo_hash.h > index f7afc4dd79..8ea793c66e 100644 > --- a/lib/hash/rte_cuckoo_hash.h > +++ b/lib/hash/rte_cuckoo_hash.h > @@ -29,17 +29,6 @@ > #define RETURN_IF_TRUE(cond, retval) > #endif > > -#if defined(RTE_LIBRTE_HASH_DEBUG) > -#define ERR_IF_TRUE(cond, fmt, args...) do { \ > - if (cond) { \ > - RTE_LOG(ERR, HASH, fmt, ##args); \ > - return; \ > - } \ > -} while (0) > -#else > -#define ERR_IF_TRUE(cond, fmt, args...) > -#endif > - > #include <rte_hash_crc.h> > #include <rte_jhash.h> > To correct the disclaimer issue. Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com> ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v5 02/13] regexdev: fix logtype register 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand 2023-12-20 15:35 ` [PATCH v5 01/13] hash: remove some dead code David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:35 ` [PATCH v5 03/13] lib: use dedicated logtypes and macros David Marchand ` (12 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Ori Kam, Guy Kaneti, Parav Pandit This library logtype was not initialized so its logs would end up under the 0 logtype, iow, RTE_LOGTYPE_EAL. Fixes: b25246beaefc ("regexdev: add core functions") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Acked-by: Ori Kam <orika@nvidia.com> --- lib/regexdev/rte_regexdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index caec069182..d38a85eb0b 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -19,7 +19,7 @@ static struct { struct rte_regexdev_data data[RTE_MAX_REGEXDEV_DEVS]; } *rte_regexdev_shared_data; -int rte_regexdev_logtype; +RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO); static uint16_t regexdev_find_free_dev(void) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
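RTE_LOG_REGISTER_DEFAULT() defines the logtype variable and registers it with the log library from a constructor, with the given default level; a variable that is only declared stays at 0, which is why regexdev messages previously came out under logtype 0 (RTE_LOGTYPE_EAL) and could not be filtered as regexdev logs. A minimal sketch of the pattern (the rte_log_set_level() call is only an illustration of what registration enables, not part of the patch):

	#include <rte_log.h>

	/* The actual fix: define and register the dynamic logtype, with INFO
	 * as its default level; the logtype name comes from the build system. */
	RTE_LOG_REGISTER_DEFAULT(rte_regexdev_logtype, INFO);

	/* Once registered, the level can be tuned at runtime (or with the
	 * --log-level EAL option). */
	void
	regexdev_debug_logs(void)
	{
		rte_log_set_level(rte_regexdev_logtype, RTE_LOG_DEBUG);
	}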
* [PATCH v5 03/13] lib: use dedicated logtypes and macros 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand 2023-12-20 15:35 ` [PATCH v5 01/13] hash: remove some dead code David Marchand 2023-12-20 15:35 ` [PATCH v5 02/13] regexdev: fix logtype register David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:35 ` [PATCH v5 04/13] lib: add newline in logs David Marchand ` (11 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Akhil Goyal, Fan Zhang, Amit Prakash Shukla, Jerin Jacob, Naga Harish K S V No printf! When a dedicated log helper exists, use it. And no usurpation please: a library should log under its logtype (see the eventdev rx adapter update for example). Note: the RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET macro is renamed for consistency with the rest of eventdev (private) macros. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/cryptodev/rte_cryptodev.c | 2 +- lib/ethdev/ethdev_driver.c | 4 ++-- lib/ethdev/ethdev_private.c | 2 +- lib/ethdev/rte_class_eth.c | 2 +- lib/eventdev/rte_event_dma_adapter.c | 4 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 12 ++++++------ lib/eventdev/rte_eventdev.c | 6 +++--- lib/mempool/rte_mempool_ops.c | 2 +- 8 files changed, 17 insertions(+), 17 deletions(-) diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index 25e3ec12d1..ead8c9a623 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2684,7 +2684,7 @@ rte_cryptodev_driver_id_get(const char *name) int driver_id = -1; if (name == NULL) { - RTE_LOG(DEBUG, CRYPTODEV, "name pointer NULL"); + CDEV_LOG_DEBUG("name pointer NULL"); return -1; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index fff4b7b4cd..55a9dcc565 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_LOG(ERR, EAL, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", dargs); result = -1; goto parse_cleanup; @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index e98b7188b0..0e1c7b23c1 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); return str == NULL ? 
-1 : 0; } diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index b61dae849d..311beb17cb 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_LOG(ERR, EAL, "cannot parse argument list\n"); + RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); rte_errno = EINVAL; return NULL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index af4b5ad388..cbf9405438 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1046,7 +1046,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->vchanq == NULL) { - printf("Queue pair add not supported\n"); + RTE_EDEV_LOG_ERR("Queue pair add not supported"); return -ENOMEM; } } @@ -1057,7 +1057,7 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, sizeof(struct dma_vchan_info), 0, adapter->socket_id); if (dev_info->tqmap == NULL) { - printf("tq pair add not supported\n"); + RTE_EDEV_LOG_ERR("tq pair add not supported"); return -ENOMEM; } } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 6db03adf04..82ae31712d 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -314,9 +314,9 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, } \ } while (0) -#define RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(port_id, retval) do { \ +#define RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_EDEV_LOG_ERR("Invalid port_id=%u", port_id); \ ret = retval; \ goto error; \ } \ @@ -3671,7 +3671,7 @@ handle_rxa_get_queue_conf(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3743,7 +3743,7 @@ handle_rxa_get_queue_stats(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3813,7 +3813,7 @@ handle_rxa_queue_stats_reset(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); @@ -3868,7 +3868,7 @@ handle_rxa_instance_get(const char *cmd __rte_unused, /* Get device ID from parameter string */ eth_dev_id = strtoul(token, NULL, 10); - RTE_ETH_VALID_PORTID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); + RTE_EVENT_ETH_RX_ADAPTER_PORTID_VALID_OR_GOTO_ERR_RET(eth_dev_id, -EINVAL); token = strtok(NULL, ","); RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, -1); diff --git 
a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 0ca32d6721..ae50821a3f 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1428,8 +1428,8 @@ rte_event_vector_pool_create(const char *name, unsigned int n, int ret; if (!nb_elem) { - RTE_LOG(ERR, EVENTDEV, - "Invalid number of elements=%d requested\n", nb_elem); + RTE_EDEV_LOG_ERR("Invalid number of elements=%d requested", + nb_elem); rte_errno = EINVAL; return NULL; } @@ -1444,7 +1444,7 @@ rte_event_vector_pool_create(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n"); + RTE_EDEV_LOG_ERR("error setting mempool handler"); goto err; } diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index ae1d288f27..e871de9ec9 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -46,7 +46,7 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, EAL, "%s(): mempool_ops <%s>: name too long\n", + RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", __func__, h->name); rte_errno = EEXIST; return -EEXIST; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
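For readers less familiar with DPDK logging, the pattern this patch moves towards is a per-library logtype plus a small wrapper macro, so a message is attributed to (and filterable by) the library that emits it rather than EAL or another subsystem. The sketch below is illustrative only: the component name "foo", the foo_logtype variable, FOO_LOG() and foo_check() are invented for this example and are not part of the series; the wrapper appends the newline itself, so callers must not put one in the format string.

#include <errno.h>
#include <rte_log.h>

/* Hypothetical component "foo": register a dedicated logtype and wrap
 * rte_log() so messages carry the component's own prefix. */
RTE_LOG_REGISTER(foo_logtype, lib.foo, INFO);

#define FOO_LOG(level, fmt, args...) \
	rte_log(RTE_LOG_ ## level, foo_logtype, "FOO: " fmt "\n", ## args)

static int
foo_check(const void *arg)
{
	if (arg == NULL) {
		/* Attributed to "foo", not usurping EAL's logtype. */
		FOO_LOG(ERR, "arg cannot be NULL");
		return -EINVAL;
	}
	return 0;
}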
* [PATCH v5 04/13] lib: add newline in logs 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (2 preceding siblings ...) 2023-12-20 15:35 ` [PATCH v5 03/13] lib: use dedicated logtypes and macros David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:35 ` [PATCH v5 05/13] lib: remove redundant newline from logs David Marchand ` (10 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Tyler Retzlaff, Andrew Rybchenko, Harman Kalra, Vladimir Medvedkin, Anatoly Burakov, David Hunt, Sivaprasad Tummala Fix places leading to a log message not terminated with a newline. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- lib/eal/common/eal_common_options.c | 2 +- lib/eal/linux/eal_hugepage_info.c | 2 +- lib/eal/linux/eal_interrupts.c | 2 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/rte_ethdev.c | 40 ++++++++++++++--------------- lib/lpm/rte_lpm6.c | 6 ++--- lib/power/guest_channel.c | 2 +- lib/power/rte_power_pmd_mgmt.c | 6 ++--- 8 files changed, 31 insertions(+), 31 deletions(-) diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c index a6d21f1cba..e9ba01fb89 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -2141,7 +2141,7 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled"); + RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); return -EPERM; } diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 581d9dfc91..36a495fb1f 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s", + RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index d4919dff45..eabac24992 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized"); + RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); return -EINVAL; } } else { diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index 320e3e0093..ddb559aa95 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p", + RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", (void *)eth_dev, (void 
*)pci_dev); return; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 3858983fcc..b9d99ece15 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -724,7 +724,7 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name"); + RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); return -EINVAL; } @@ -2394,41 +2394,41 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu\n", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported"); + "Attempt to use locked device memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported"); + "Attempt to use DPDK memory for Rx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue"); + "Attempt to use mutually exclusive memory settings for Rx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set"); + "Attempt to force Rx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0", + "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -2597,41 +2597,41 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported"); + "Attempt to use locked device memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { 
RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported"); + "Attempt to use DPDK memory for Tx queue, which is not supported\n"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue"); + "Attempt to use mutually exclusive memory settings for Tx queue\n"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set"); + "Attempt to force Tx queue memory settings, but none is set\n"); return -EINVAL; } if (conf->peer_count == 0) { RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0", + "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d", + RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", cap.max_nb_queues); return -EINVAL; } @@ -6716,7 +6716,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); return -EINVAL; } @@ -6752,7 +6752,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL"); + RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); return -EINVAL; } @@ -6780,7 +6780,7 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration", + "Cannot set IP reassembly configuration\n", port_id); return -EINVAL; } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 873cc8bc26..24ce7dd022 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)", + RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index cc05347425..a6f2097d5b 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -90,7 +90,7 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { 
RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s", fd_path); + "file %s\n", fd_path); goto error; } /* QEMU needs a delay after connection */ diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 38f8384085..6f18ed0adf 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged"); + RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); return -EINVAL; } pause_duration = duration; @@ -709,7 +709,7 @@ rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency"); + RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency"); + RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
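To make the effect of these fixes concrete: with RTE_LOG() and the other legacy helpers touched above, line termination is the caller's responsibility, so a format string that forgets its trailing \n makes the next message continue on the same output line. The snippet below is an illustration only, with made-up messages.

#include <rte_log.h>

static void
newline_demo(void)
{
	RTE_LOG(ERR, EAL, "first message");	/* missing trailing \n */
	RTE_LOG(ERR, EAL, "second message\n");
	/* Console output: "EAL: first messageEAL: second message" */
}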
* [PATCH v5 05/13] lib: remove redundant newline from logs 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (3 preceding siblings ...) 2023-12-20 15:35 ` [PATCH v5 04/13] lib: add newline in logs David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:35 ` [PATCH v5 06/13] eal/linux: remove log paraphrasing the doc David Marchand ` (9 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, stable, Chengwen Feng, Mattias Rönnblom, Kai Ji, Pablo de Lara, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Jerin Jacob, Abhinandan Gujjar, Amit Prakash Shukla, Naga Harish K S V, Erik Gabriel Carrillo, Srikanth Yalavarthi, Jasvinder Singh, Nithin Dabilpuram, Pavan Nikhilesh, Honnappa Nagarahalli, Maxime Coquelin, Chenbo Xia Fix places where two newline characters may be logged. Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com> --- Changes since RFC v1: - split fixes on direct calls to printf or RTE_LOG in a previous patch, --- drivers/crypto/ipsec_mb/ipsec_mb_ops.c | 2 +- lib/bbdev/rte_bbdev.c | 6 +- lib/cfgfile/rte_cfgfile.c | 14 ++-- lib/compressdev/rte_compressdev_pmd.c | 4 +- lib/cryptodev/rte_cryptodev.c | 2 +- lib/dispatcher/rte_dispatcher.c | 12 +-- lib/dmadev/rte_dmadev.c | 2 +- lib/eal/windows/eal_memory.c | 2 +- lib/eventdev/eventdev_pmd.h | 6 +- lib/eventdev/rte_event_crypto_adapter.c | 12 +-- lib/eventdev/rte_event_dma_adapter.c | 14 ++-- lib/eventdev/rte_event_eth_rx_adapter.c | 28 +++---- lib/eventdev/rte_event_eth_tx_adapter.c | 2 +- lib/eventdev/rte_event_timer_adapter.c | 4 +- lib/eventdev/rte_eventdev.c | 4 +- lib/metrics/rte_metrics_telemetry.c | 2 +- lib/mldev/rte_mldev.c | 102 ++++++++++++------------ lib/net/rte_net_crc.c | 6 +- lib/node/ethdev_rx.c | 4 +- lib/node/ip4_lookup.c | 2 +- lib/node/ip6_lookup.c | 2 +- lib/node/kernel_rx.c | 8 +- lib/node/kernel_tx.c | 4 +- lib/rcu/rte_rcu_qsbr.c | 4 +- lib/rcu/rte_rcu_qsbr.h | 8 +- lib/stack/rte_stack.c | 8 +- lib/vhost/vhost_crypto.c | 6 +- 27 files changed, 135 insertions(+), 135 deletions(-) diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c index 52d6d010c7..f21f9cc5a0 100644 --- a/drivers/crypto/ipsec_mb/ipsec_mb_ops.c +++ b/drivers/crypto/ipsec_mb/ipsec_mb_ops.c @@ -407,7 +407,7 @@ ipsec_mb_ipc_request(const struct rte_mp_msg *mp_msg, const void *peer) resp_param->result = ipsec_mb_qp_release(dev, qp_id); break; default: - CDEV_LOG_ERR("invalid mp request type\n"); + CDEV_LOG_ERR("invalid mp request type"); } out: diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index cfebea09c7..e09bb97abb 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -1106,12 +1106,12 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, intr_handle = dev->intr_handle; if (intr_handle == NULL) { - rte_bbdev_log(ERR, "Device %u intr handle unset\n", dev_id); + rte_bbdev_log(ERR, "Device %u intr handle unset", dev_id); return -ENOTSUP; } if (queue_id >= RTE_MAX_RXTX_INTR_VEC_ID) { - rte_bbdev_log(ERR, "Device %u queue_id %u is too big\n", + rte_bbdev_log(ERR, "Device %u 
queue_id %u is too big", dev_id, queue_id); return -ENOTSUP; } @@ -1120,7 +1120,7 @@ rte_bbdev_queue_intr_ctl(uint16_t dev_id, uint16_t queue_id, int epfd, int op, ret = rte_intr_rx_ctl(intr_handle, epfd, op, vec, data); if (ret && (ret != -EEXIST)) { rte_bbdev_log(ERR, - "dev %u q %u int ctl error op %d epfd %d vec %u\n", + "dev %u q %u int ctl error op %d epfd %d vec %u", dev_id, queue_id, op, epfd, vec); return ret; } diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index eefba6e408..2f9cc0722a 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -137,7 +137,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) unsigned int i; if (!params) { - CFG_LOG(ERR, "missing cfgfile parameters\n"); + CFG_LOG(ERR, "missing cfgfile parameters"); return -EINVAL; } @@ -150,7 +150,7 @@ rte_cfgfile_check_params(const struct rte_cfgfile_parameters *params) } if (valid_comment == 0) { - CFG_LOG(ERR, "invalid comment characters %c\n", + CFG_LOG(ERR, "invalid comment characters %c", params->comment_character); return -ENOTSUP; } @@ -188,7 +188,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, lineno++; if ((len >= sizeof(buffer) - 1) && (buffer[len-1] != '\n')) { CFG_LOG(ERR, " line %d - no \\n found on string. " - "Check if line too long\n", lineno); + "Check if line too long", lineno); goto error1; } /* skip parsing if comment character found */ @@ -209,7 +209,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, char *end = memchr(buffer, ']', len); if (end == NULL) { CFG_LOG(ERR, - "line %d - no terminating ']' character found\n", + "line %d - no terminating ']' character found", lineno); goto error1; } @@ -225,7 +225,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, split[1] = memchr(buffer, '=', len); if (split[1] == NULL) { CFG_LOG(ERR, - "line %d - no '=' character found\n", + "line %d - no '=' character found", lineno); goto error1; } @@ -249,7 +249,7 @@ rte_cfgfile_load_with_params(const char *filename, int flags, if (!(flags & CFG_FLAG_EMPTY_VALUES) && (*split[1] == '\0')) { CFG_LOG(ERR, - "line %d - cannot use empty values\n", + "line %d - cannot use empty values", lineno); goto error1; } @@ -414,7 +414,7 @@ int rte_cfgfile_set_entry(struct rte_cfgfile *cfg, const char *sectionname, return 0; } - CFG_LOG(ERR, "entry name doesn't exist\n"); + CFG_LOG(ERR, "entry name doesn't exist"); return -EINVAL; } diff --git a/lib/compressdev/rte_compressdev_pmd.c b/lib/compressdev/rte_compressdev_pmd.c index 156bccd972..762b44f03e 100644 --- a/lib/compressdev/rte_compressdev_pmd.c +++ b/lib/compressdev/rte_compressdev_pmd.c @@ -100,12 +100,12 @@ rte_compressdev_pmd_create(const char *name, struct rte_compressdev *compressdev; if (params->name[0] != '\0') { - COMPRESSDEV_LOG(INFO, "User specified device name = %s\n", + COMPRESSDEV_LOG(INFO, "User specified device name = %s", params->name); name = params->name; } - COMPRESSDEV_LOG(INFO, "Creating compressdev %s\n", name); + COMPRESSDEV_LOG(INFO, "Creating compressdev %s", name); COMPRESSDEV_LOG(INFO, "Init parameters - name: %s, socket id: %d", name, params->socket_id); diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c index ead8c9a623..b233c0ecd7 100644 --- a/lib/cryptodev/rte_cryptodev.c +++ b/lib/cryptodev/rte_cryptodev.c @@ -2074,7 +2074,7 @@ rte_cryptodev_sym_session_create(uint8_t dev_id, } if (xforms == NULL) { - CDEV_LOG_ERR("Invalid xform\n"); + CDEV_LOG_ERR("Invalid xform"); rte_errno = EINVAL; return NULL; } diff --git 
a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c index 10d02edde9..95dd41b818 100644 --- a/lib/dispatcher/rte_dispatcher.c +++ b/lib/dispatcher/rte_dispatcher.c @@ -246,7 +246,7 @@ evd_service_register(struct rte_dispatcher *dispatcher) rc = rte_service_component_register(&service, &dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Registration of dispatcher service " - "%s failed with error code %d\n", + "%s failed with error code %d", service.name, rc); return rc; @@ -260,7 +260,7 @@ evd_service_unregister(struct rte_dispatcher *dispatcher) rc = rte_service_component_unregister(dispatcher->service_id); if (rc != 0) RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " - "failed with error code %d\n", rc); + "failed with error code %d", rc); return rc; } @@ -279,7 +279,7 @@ rte_dispatcher_create(uint8_t event_dev_id) RTE_CACHE_LINE_SIZE, socket_id); if (dispatcher == NULL) { - RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher"); rte_errno = ENOMEM; return NULL; } @@ -483,7 +483,7 @@ evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); if (unreg_handler == NULL) { - RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + RTE_EDEV_LOG_ERR("Invalid handler id %d", handler_id); return -EINVAL; } @@ -602,7 +602,7 @@ rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); if (unreg_finalizer == NULL) { - RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + RTE_EDEV_LOG_ERR("Invalid finalizer id %d", finalizer_id); return -EINVAL; } @@ -636,7 +636,7 @@ evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) */ if (rc != 0) RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " - "service component run state to %d\n", rc, + "service component run state to %d", rc, state); RTE_VERIFY(rc == 0); diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 4e5e420c82..009a21849a 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -726,7 +726,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status * return -EINVAL; if (vchan >= dev->data->dev_conf.nb_vchans) { - RTE_DMA_LOG(ERR, "Device %u vchan %u out of range\n", dev_id, vchan); + RTE_DMA_LOG(ERR, "Device %u vchan %u out of range", dev_id, vchan); return -EINVAL; } diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index 31410a41fd..fd39155163 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -110,7 +110,7 @@ eal_mem_win32api_init(void) VirtualAlloc2_ptr = (VirtualAlloc2_type)( (void *)GetProcAddress(library, function)); if (VirtualAlloc2_ptr == NULL) { - RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")\n", + RTE_LOG_WIN32_ERR("GetProcAddress(\"%s\", \"%s\")", library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 30bd90085c..2ec5aec0a8 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -49,14 +49,14 @@ extern "C" { /* Macros to check for valid device */ #define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return retval; \ } \ } while (0) #define RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, errno, retval) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ rte_errno = errno; \ return retval; \ } \ @@ -64,7 +64,7 @@ extern "C" { #define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \ if (!rte_event_pmd_is_valid_dev((dev_id))) { \ - RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \ + RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \ return; \ } \ } while (0) diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c index 1b435c9f0e..d46595d190 100644 --- a/lib/eventdev/rte_event_crypto_adapter.c +++ b/lib/eventdev/rte_event_crypto_adapter.c @@ -133,7 +133,7 @@ static struct event_crypto_adapter **event_crypto_adapter; /* Macros to check for valid adapter */ #define EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!eca_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d", id); \ return retval; \ } \ } while (0) @@ -309,7 +309,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) return -EIO; @@ -319,7 +319,7 @@ eca_default_config_cb(uint8_t id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -391,7 +391,7 @@ rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, sizeof(struct crypto_device_info), 0, socket_id); if (adapter->cdevs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices"); eca_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -1403,7 +1403,7 @@ rte_event_crypto_adapter_runtime_params_set(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1436,7 +1436,7 @@ rte_event_crypto_adapter_runtime_params_get(uint8_t id, EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index cbf9405438..4196164305 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -20,7 +20,7 @@ #define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ do { \ if (!edma_adapter_valid_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id 
= %d", id); \ return retval; \ } \ } while (0) @@ -313,7 +313,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_dev_configure(evdev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); if (started) { if (rte_event_dev_start(evdev_id)) return -EIO; @@ -323,7 +323,7 @@ edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapte ret = rte_event_port_setup(evdev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); return ret; } @@ -407,7 +407,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, num_dma_dev * sizeof(struct dma_device_info), 0, socket_id); if (adapter->dma_devs == NULL) { - RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return -ENOMEM; @@ -417,7 +417,7 @@ rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, for (i = 0; i < num_dma_dev; i++) { ret = rte_dma_info_get(i, &info); if (ret) { - RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + RTE_EDEV_LOG_ERR("Failed to get dma device info"); edma_circular_buffer_free(&adapter->ebuf); rte_free(adapter); return ret; @@ -1297,7 +1297,7 @@ rte_event_dma_adapter_runtime_params_set(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } @@ -1326,7 +1326,7 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); if (params == NULL) { - RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + RTE_EDEV_LOG_ERR("params pointer is NULL"); return -EINVAL; } diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c index 82ae31712d..1b83a55b5c 100644 --- a/lib/eventdev/rte_event_eth_rx_adapter.c +++ b/lib/eventdev/rte_event_eth_rx_adapter.c @@ -293,14 +293,14 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ return retval; \ } \ } while (0) #define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_GOTO_ERR_RET(id, retval) do { \ if (!rxa_validate_id(id)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d", id); \ ret = retval; \ goto error; \ } \ @@ -308,7 +308,7 @@ rxa_event_buf_get(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id, #define RTE_EVENT_ETH_RX_ADAPTER_TOKEN_VALID_OR_GOTO_ERR_RET(token, retval) do { \ if ((token) == NULL || strlen(token) == 0 || !isdigit(*token)) { \ - RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token\n"); \ + RTE_EDEV_LOG_ERR("Invalid eth Rx adapter token"); \ ret = retval; \ goto error; \ } \ @@ -1540,7 +1540,7 @@ rxa_default_conf_cb(uint8_t id, uint8_t dev_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", + RTE_EDEV_LOG_ERR("failed to configure event dev %u", dev_id); if (started) { if (rte_event_dev_start(dev_id)) @@ -1551,7 +1551,7 @@ rxa_default_conf_cb(uint8_t 
id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); return ret; } @@ -1628,7 +1628,7 @@ rxa_create_intr_thread(struct event_eth_rx_adapter *rx_adapter) if (!err) return 0; - RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Failed to create interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); error: rte_ring_free(rx_adapter->intr_ring); @@ -1644,12 +1644,12 @@ rxa_destroy_intr_thread(struct event_eth_rx_adapter *rx_adapter) err = pthread_cancel((pthread_t)rx_adapter->rx_intr_thread.opaque_id); if (err) - RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d\n", + RTE_EDEV_LOG_ERR("Can't cancel interrupt thread err = %d", err); err = rte_thread_join(rx_adapter->rx_intr_thread, NULL); if (err) - RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d\n", err); + RTE_EDEV_LOG_ERR("Can't join interrupt thread err = %d", err); rte_free(rx_adapter->epoll_events); rte_ring_free(rx_adapter->intr_ring); @@ -1915,7 +1915,7 @@ rxa_init_service(struct event_eth_rx_adapter *rx_adapter, uint8_t id) if (rte_mbuf_dyn_rx_timestamp_register( &event_eth_rx_timestamp_dynfield_offset, &event_eth_rx_timestamp_dynflag) != 0) { - RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf\n"); + RTE_EDEV_LOG_ERR("Error registering timestamp field in mbuf"); return -rte_errno; } @@ -2445,7 +2445,7 @@ rxa_create(uint8_t id, uint8_t dev_id, RTE_DIM(default_rss_key)); if (rx_adapter->eth_devices == NULL) { - RTE_EDEV_LOG_ERR("failed to get mem for eth devices\n"); + RTE_EDEV_LOG_ERR("failed to get mem for eth devices"); rte_free(rx_adapter); return -ENOMEM; } @@ -2497,12 +2497,12 @@ rxa_config_params_validate(struct rte_event_eth_rx_adapter_params *rxa_params, return 0; } else if (!rxa_params->use_queue_event_buf && rxa_params->event_buf_size == 0) { - RTE_EDEV_LOG_ERR("event buffer size can't be zero\n"); + RTE_EDEV_LOG_ERR("event buffer size can't be zero"); return -EINVAL; } else if (rxa_params->use_queue_event_buf && rxa_params->event_buf_size != 0) { RTE_EDEV_LOG_ERR("event buffer size needs to be configured " - "as part of queue add\n"); + "as part of queue add"); return -EINVAL; } @@ -3597,7 +3597,7 @@ handle_rxa_stats(const char *cmd __rte_unused, /* Get Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_get(rx_adapter_id, &rx_adptr_stats)) { - RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to get Rx adapter stats"); return -1; } @@ -3636,7 +3636,7 @@ handle_rxa_stats_reset(const char *cmd __rte_unused, /* Reset Rx adapter stats */ if (rte_event_eth_rx_adapter_stats_reset(rx_adapter_id)) { - RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats\n"); + RTE_EDEV_LOG_ERR("Failed to reset Rx adapter stats"); return -1; } diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c index 360d5caf6a..56435be991 100644 --- a/lib/eventdev/rte_event_eth_tx_adapter.c +++ b/lib/eventdev/rte_event_eth_tx_adapter.c @@ -334,7 +334,7 @@ txa_service_conf_cb(uint8_t __rte_unused id, uint8_t dev_id, ret = rte_event_port_setup(dev_id, port_id, pc); if (ret) { - RTE_EDEV_LOG_ERR("failed to setup event port %u\n", + RTE_EDEV_LOG_ERR("failed to setup event port %u", port_id); if (started) { if (rte_event_dev_start(dev_id)) diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 27466707bc..3f22e85173 100644 --- 
a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -106,7 +106,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_dev_configure(dev_id, &dev_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to configure event dev %u\n", dev_id); + EVTIM_LOG_ERR("failed to configure event dev %u", dev_id); if (started) if (rte_event_dev_start(dev_id)) return -EIO; @@ -116,7 +116,7 @@ default_port_conf_cb(uint16_t id, uint8_t event_dev_id, uint8_t *event_port_id, ret = rte_event_port_setup(dev_id, port_id, port_conf); if (ret < 0) { - EVTIM_LOG_ERR("failed to setup event port %u on event dev %u\n", + EVTIM_LOG_ERR("failed to setup event port %u on event dev %u", port_id, dev_id); return ret; } diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index ae50821a3f..157752868d 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -1007,13 +1007,13 @@ rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t } if (*dev->dev_ops->port_link == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } if (profile_id && *dev->dev_ops->port_link_profile == NULL) { - RTE_EDEV_LOG_ERR("Function not supported\n"); + RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 5be21b2e86..1d133e1f8c 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -363,7 +363,7 @@ rte_metrics_tel_stat_names_to_ids(const char * const *stat_names, } } if (j == num_metrics) { - METRICS_LOG_WARN("Invalid stat name %s\n", + METRICS_LOG_WARN("Invalid stat name %s", stat_names[i]); free(names); return -EINVAL; diff --git a/lib/mldev/rte_mldev.c b/lib/mldev/rte_mldev.c index cc5f2e0cc6..196b1850e6 100644 --- a/lib/mldev/rte_mldev.c +++ b/lib/mldev/rte_mldev.c @@ -159,7 +159,7 @@ int rte_ml_dev_init(size_t dev_max) { if (dev_max == 0 || dev_max > INT16_MAX) { - RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)\n", dev_max, INT16_MAX); + RTE_MLDEV_LOG(ERR, "Invalid dev_max = %zu (> %d)", dev_max, INT16_MAX); rte_errno = EINVAL; return -rte_errno; } @@ -217,7 +217,7 @@ rte_ml_dev_socket_id(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -232,7 +232,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -241,7 +241,7 @@ rte_ml_dev_info_get(int16_t dev_id, struct rte_ml_dev_info *dev_info) return -ENOTSUP; if (dev_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dev_info cannot be NULL", dev_id); return -EINVAL; } memset(dev_info, 0, sizeof(struct rte_ml_dev_info)); @@ -257,7 +257,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -271,7 +271,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) } if (config == NULL) 
{ - RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, config cannot be NULL", dev_id); return -EINVAL; } @@ -280,7 +280,7 @@ rte_ml_dev_configure(int16_t dev_id, const struct rte_ml_dev_config *config) return ret; if (config->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u\n", dev_id, + RTE_MLDEV_LOG(ERR, "Device %d num of queues %u > %u", dev_id, config->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } @@ -294,7 +294,7 @@ rte_ml_dev_close(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -318,7 +318,7 @@ rte_ml_dev_start(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -345,7 +345,7 @@ rte_ml_dev_stop(int16_t dev_id) int ret; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -372,7 +372,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -386,7 +386,7 @@ rte_ml_dev_queue_pair_setup(int16_t dev_id, uint16_t queue_pair_id, } if (qp_conf == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qp_conf cannot be NULL", dev_id); return -EINVAL; } @@ -404,7 +404,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -413,7 +413,7 @@ rte_ml_dev_stats_get(int16_t dev_id, struct rte_ml_dev_stats *stats) return -ENOTSUP; if (stats == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stats cannot be NULL", dev_id); return -EINVAL; } memset(stats, 0, sizeof(struct rte_ml_dev_stats)); @@ -427,7 +427,7 @@ rte_ml_dev_stats_reset(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return; } @@ -445,7 +445,7 @@ rte_ml_dev_xstats_names_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, in struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -462,7 +462,7 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -471,12 +471,12 @@ rte_ml_dev_xstats_by_name_get(int16_t dev_id, const char *name, uint16_t *stat_i return -ENOTSUP; if (name == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, name cannot be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL\n", dev_id); + 
RTE_MLDEV_LOG(ERR, "Dev %d, value cannot be NULL", dev_id); return -EINVAL; } @@ -490,7 +490,7 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -499,12 +499,12 @@ rte_ml_dev_xstats_get(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_t return -ENOTSUP; if (stat_ids == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, stat_ids cannot be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, values cannot be NULL", dev_id); return -EINVAL; } @@ -518,7 +518,7 @@ rte_ml_dev_xstats_reset(int16_t dev_id, enum rte_ml_dev_xstats_mode mode, int32_ struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -535,7 +535,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -544,7 +544,7 @@ rte_ml_dev_dump(int16_t dev_id, FILE *fd) return -ENOTSUP; if (fd == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, file descriptor cannot be NULL", dev_id); return -EINVAL; } @@ -557,7 +557,7 @@ rte_ml_dev_selftest(int16_t dev_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -574,7 +574,7 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -583,12 +583,12 @@ rte_ml_model_load(int16_t dev_id, struct rte_ml_model_params *params, uint16_t * return -ENOTSUP; if (params == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, params cannot be NULL", dev_id); return -EINVAL; } if (model_id == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, model_id cannot be NULL", dev_id); return -EINVAL; } @@ -601,7 +601,7 @@ rte_ml_model_unload(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -618,7 +618,7 @@ rte_ml_model_start(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -635,7 +635,7 @@ rte_ml_model_stop(int16_t dev_id, uint16_t model_id) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -652,7 +652,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf struct rte_ml_dev *dev; if 
(!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -661,7 +661,7 @@ rte_ml_model_info_get(int16_t dev_id, uint16_t model_id, struct rte_ml_model_inf return -ENOTSUP; if (model_info == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL\n", dev_id, + RTE_MLDEV_LOG(ERR, "Dev %d, model_id %u, model_info cannot be NULL", dev_id, model_id); return -EINVAL; } @@ -675,7 +675,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -684,7 +684,7 @@ rte_ml_model_params_update(int16_t dev_id, uint16_t model_id, void *buffer) return -ENOTSUP; if (buffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, buffer cannot be NULL", dev_id); return -EINVAL; } @@ -698,7 +698,7 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -707,12 +707,12 @@ rte_ml_io_quantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg **d return -ENOTSUP; if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -726,7 +726,7 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * struct rte_ml_dev *dev; if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -735,12 +735,12 @@ rte_ml_io_dequantize(int16_t dev_id, uint16_t model_id, struct rte_ml_buff_seg * return -ENOTSUP; if (qbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, qbuffer cannot be NULL", dev_id); return -EINVAL; } if (dbuffer == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, dbuffer cannot be NULL", dev_id); return -EINVAL; } @@ -811,7 +811,7 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -823,13 +823,13 @@ rte_ml_enqueue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -847,7 +847,7 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", 
dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); rte_errno = -EINVAL; return 0; } @@ -859,13 +859,13 @@ rte_ml_dequeue_burst(int16_t dev_id, uint16_t qp_id, struct rte_ml_op **ops, uin } if (ops == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, ops cannot be NULL", dev_id); rte_errno = -EINVAL; return 0; } if (qp_id >= dev->data->nb_queue_pairs) { - RTE_MLDEV_LOG(ERR, "Invalid qp_id %u\n", qp_id); + RTE_MLDEV_LOG(ERR, "Invalid qp_id %u", qp_id); rte_errno = -EINVAL; return 0; } @@ -883,7 +883,7 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error #ifdef RTE_LIBRTE_ML_DEV_DEBUG if (!rte_ml_dev_is_valid_dev(dev_id)) { - RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d\n", dev_id); + RTE_MLDEV_LOG(ERR, "Invalid dev_id = %d", dev_id); return -EINVAL; } @@ -892,12 +892,12 @@ rte_ml_op_error_get(int16_t dev_id, struct rte_ml_op *op, struct rte_ml_op_error return -ENOTSUP; if (op == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, op cannot be NULL", dev_id); return -EINVAL; } if (error == NULL) { - RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL\n", dev_id); + RTE_MLDEV_LOG(ERR, "Dev %d, error cannot be NULL", dev_id); return -EINVAL; } #else diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index a685f9e7bb..900d6de7f4 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -179,7 +179,7 @@ avx512_vpclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_512) return handlers_avx512; #endif - NET_LOG(INFO, "Requirements not met, can't use AVX512\n"); + NET_LOG(INFO, "Requirements not met, can't use AVX512"); return NULL; } @@ -205,7 +205,7 @@ sse42_pclmulqdq_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_sse42; #endif - NET_LOG(INFO, "Requirements not met, can't use SSE\n"); + NET_LOG(INFO, "Requirements not met, can't use SSE"); return NULL; } @@ -231,7 +231,7 @@ neon_pmull_get_handlers(void) max_simd_bitwidth >= RTE_VECT_SIMD_128) return handlers_neon; #endif - NET_LOG(INFO, "Requirements not met, can't use NEON\n"); + NET_LOG(INFO, "Requirements not met, can't use NEON"); return NULL; } diff --git a/lib/node/ethdev_rx.c b/lib/node/ethdev_rx.c index 3e8fac1df4..475eff6abe 100644 --- a/lib/node/ethdev_rx.c +++ b/lib/node/ethdev_rx.c @@ -160,13 +160,13 @@ ethdev_ptype_setup(uint16_t port, uint16_t queue) if (!l3_ipv4 || !l3_ipv6) { node_info("ethdev_rx", - "Enabling ptype callback for required ptypes on port %u\n", + "Enabling ptype callback for required ptypes on port %u", port); if (!rte_eth_add_rx_callback(port, queue, eth_pkt_parse_cb, NULL)) { node_err("ethdev_rx", - "Failed to add rx ptype cb: port=%d, queue=%d\n", + "Failed to add rx ptype cb: port=%d, queue=%d", port, queue); return -EINVAL; } diff --git a/lib/node/ip4_lookup.c b/lib/node/ip4_lookup.c index 0dbfde64fe..18955971f6 100644 --- a/lib/node/ip4_lookup.c +++ b/lib/node/ip4_lookup.c @@ -143,7 +143,7 @@ rte_node_ip4_route_add(uint32_t ip, uint8_t depth, uint16_t next_hop, ip, depth, val); if (ret < 0) { node_err("ip4_lookup", - "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d\n", + "Unable to add entry %s / %d nh (%x) to LPM table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/ip6_lookup.c b/lib/node/ip6_lookup.c index 6f56eb5ec5..309964f60f 100644 --- a/lib/node/ip6_lookup.c +++ b/lib/node/ip6_lookup.c @@ -283,7 +283,7 @@ rte_node_ip6_route_add(const uint8_t *ip, uint8_t depth, 
uint16_t next_hop, if (ret < 0) { node_err("ip6_lookup", "Unable to add entry %s / %d nh (%x) to LPM " - "table on sock %d, rc=%d\n", + "table on sock %d, rc=%d", abuf, depth, val, socket, ret); return ret; } diff --git a/lib/node/kernel_rx.c b/lib/node/kernel_rx.c index 2dba7c8cc7..6c20cdbb1e 100644 --- a/lib/node/kernel_rx.c +++ b/lib/node/kernel_rx.c @@ -134,7 +134,7 @@ kernel_rx_node_do(struct rte_graph *graph, struct rte_node *node, kernel_rx_node if (len == 0 || len == 0xFFFF) { rte_pktmbuf_free(m); if (rx->idx <= 0) - node_dbg("kernel_rx", "rx_mbuf array is empty\n"); + node_dbg("kernel_rx", "rx_mbuf array is empty"); rx->idx--; break; } @@ -207,20 +207,20 @@ kernel_rx_node_init(const struct rte_graph *graph, struct rte_node *node) RTE_VERIFY(elem != NULL); if (ctx->pktmbuf_pool == NULL) { - node_err("kernel_rx", "Invalid mbuf pool on graph %s\n", graph->name); + node_err("kernel_rx", "Invalid mbuf pool on graph %s", graph->name); return -EINVAL; } recv_info = rte_zmalloc_socket("kernel_rx_info", sizeof(kernel_rx_info_t), RTE_CACHE_LINE_SIZE, graph->socket); if (!recv_info) { - node_err("kernel_rx", "Kernel recv_info is NULL\n"); + node_err("kernel_rx", "Kernel recv_info is NULL"); return -ENOMEM; } sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (sock < 0) { - node_err("kernel_rx", "Unable to open RAW socket\n"); + node_err("kernel_rx", "Unable to open RAW socket"); return sock; } diff --git a/lib/node/kernel_tx.c b/lib/node/kernel_tx.c index 27d1808c71..3a96741622 100644 --- a/lib/node/kernel_tx.c +++ b/lib/node/kernel_tx.c @@ -36,7 +36,7 @@ kernel_tx_process_mbuf(struct rte_node *node, struct rte_mbuf **mbufs, uint16_t sin.sin_addr.s_addr = ip4->dst_addr; if (sendto(ctx->sock, buf, len, 0, (struct sockaddr *)&sin, sizeof(sin)) < 0) - node_err("kernel_tx", "Unable to send packets: %s\n", strerror(errno)); + node_err("kernel_tx", "Unable to send packets: %s", strerror(errno)); } } @@ -87,7 +87,7 @@ kernel_tx_node_init(const struct rte_graph *graph __rte_unused, struct rte_node ctx->sock = socket(AF_INET, SOCK_RAW, IPPROTO_RAW); if (ctx->sock < 0) - node_err("kernel_tx", "Unable to open RAW socket\n"); + node_err("kernel_tx", "Unable to open RAW socket"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index a9f3d6cc98..41a44be4b9 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -92,7 +92,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; @@ -144,7 +144,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) return 1; } - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); id = thread_id & __RTE_QSBR_THRID_MASK; diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 5979fb0efb..6b908e7ee0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -299,7 +299,7 @@ rte_rcu_qsbr_thread_online(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Copy the current value of token. 
@@ -350,7 +350,7 @@ rte_rcu_qsbr_thread_offline(struct rte_rcu_qsbr *v, unsigned int thread_id) { RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* The reader can go offline only after the load of the @@ -427,7 +427,7 @@ rte_rcu_qsbr_unlock(__rte_unused struct rte_rcu_qsbr *v, 1, rte_memory_order_release); __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, WARNING, - "Lock counter %u. Nested locks?\n", + "Lock counter %u. Nested locks?", v->qsbr_cnt[thread_id].lock_cnt); #endif } @@ -481,7 +481,7 @@ rte_rcu_qsbr_quiescent(struct rte_rcu_qsbr *v, unsigned int thread_id) RTE_ASSERT(v != NULL && thread_id < v->max_threads); - __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u\n", + __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, ERR, "Lock counter %u", v->qsbr_cnt[thread_id].lock_cnt); /* Acquire the changes to the shared data structure released diff --git a/lib/stack/rte_stack.c b/lib/stack/rte_stack.c index 1fabec2bfe..1dab6d6645 100644 --- a/lib/stack/rte_stack.c +++ b/lib/stack/rte_stack.c @@ -56,7 +56,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, int ret; if (flags & ~(RTE_STACK_F_LF)) { - STACK_LOG_ERR("Unsupported stack flags %#x\n", flags); + STACK_LOG_ERR("Unsupported stack flags %#x", flags); return NULL; } @@ -65,7 +65,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, #endif #if !defined(RTE_STACK_LF_SUPPORTED) if (flags & RTE_STACK_F_LF) { - STACK_LOG_ERR("Lock-free stack is not supported on your platform\n"); + STACK_LOG_ERR("Lock-free stack is not supported on your platform"); rte_errno = ENOTSUP; return NULL; } @@ -82,7 +82,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - STACK_LOG_ERR("Cannot reserve memory for tailq\n"); + STACK_LOG_ERR("Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -92,7 +92,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id, mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id, 0, __alignof__(*s)); if (mz == NULL) { - STACK_LOG_ERR("Cannot reserve stack memzone!\n"); + STACK_LOG_ERR("Cannot reserve stack memzone!"); rte_mcfg_tailq_write_unlock(); rte_free(te); return NULL; diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 3e1ef1ac25..6e5443e5f8 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -245,7 +245,7 @@ transform_cipher_param(struct rte_crypto_sym_xform *xform, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -301,7 +301,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->cipher_key_len > VHOST_USER_CRYPTO_MAX_CIPHER_KEY_LENGTH) { - VC_LOG_DBG("Invalid cipher key length\n"); + VC_LOG_DBG("Invalid cipher key length"); return -VIRTIO_CRYPTO_BADMSG; } @@ -321,7 +321,7 @@ transform_chain_param(struct rte_crypto_sym_xform *xforms, return ret; if (param->auth_key_len > VHOST_USER_CRYPTO_MAX_HMAC_KEY_LENGTH) { - VC_LOG_DBG("Invalid auth key length\n"); + VC_LOG_DBG("Invalid auth key length"); return -VIRTIO_CRYPTO_BADMSG; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
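The redundant terminators removed above are easy to reintroduce by hand, which is what the "Detect superfluous newline in logs" series title refers to. For illustration, a build-time guard against a stray \n can be sketched as below. This is only one possible approach, not a description of how the series ultimately implements the check: it assumes GCC (folding of __builtin_strchr() on a literal format string into a constant expression) and C11 static_assert, and the macro name LOG_ONE_LINE and the my_logtype variable are invented for this sketch. The wrapper appends the newline itself, so any \n left in the format string becomes a compile error.

#include <assert.h>
#include <rte_log.h>

extern int my_logtype;	/* hypothetical, registered elsewhere */

#define LOG_ONE_LINE(level, fmt, args...) do { \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain a newline"); \
	rte_log(RTE_LOG_ ## level, my_logtype, fmt "\n", ## args); \
} while (0)

/*
 * LOG_ONE_LINE(ERR, "something failed");    -- builds, one line logged
 * LOG_ONE_LINE(ERR, "something failed\n");  -- rejected at compile time
 */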
* [PATCH v5 06/13] eal/linux: remove log paraphrasing the doc 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (4 preceding siblings ...) 2023-12-20 15:35 ` [PATCH v5 05/13] lib: remove redundant newline from logs David Marchand @ 2023-12-20 15:35 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 07/13] bpf: remove log level in internal helper David Marchand ` (8 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:35 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff An error log message does not need to paraphrase the DPDK documentation. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/eal/linux/eal_timer.c | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index 3a30284e3a..df9ad61ae9 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -152,11 +152,7 @@ rte_eal_hpet_init(int make_default) } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n" - "Please enable CONFIG_HPET_MMAP in your kernel configuration " - "to allow HPET support.\n" - "To run without using HPET, unset RTE_LIBEAL_USE_HPET " - "in your build configuration or use '--no-hpet' EAL flag.\n"); + RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); close(fd); internal_conf->no_hpet = 1; return -1; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v5 07/13] bpf: remove log level in internal helper 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (5 preceding siblings ...) 2023-12-20 15:35 ` [PATCH v5 06/13] eal/linux: remove log paraphrasing the doc David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 08/13] lib: simplify multilines log messages David Marchand ` (7 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Konstantin Ananyev There is no other log level than debug, simplify this helper. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> --- lib/bpf/bpf_validate.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index 95b9ef99ef..f246b3c5eb 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -2178,18 +2178,18 @@ restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) } static void -log_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, - uint32_t pc, int32_t loglvl) +log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, + uint32_t pc) { const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - rte_log(loglvl, rte_bpf_logtype, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - rte_log(loglvl, rte_bpf_logtype, + RTE_BPF_LOG(DEBUG, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2269,7 +2269,7 @@ evaluate(struct bpf_verifier *bvf) } } - log_eval_state(bvf, ins + idx, idx, RTE_LOG_DEBUG); + log_dbg_eval_state(bvf, ins + idx, idx); bvf->evin = NULL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
* [PATCH v5 08/13] lib: simplify multilines log messages 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (6 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 07/13] bpf: remove log level in internal helper David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 09/13] lib: add more logging helpers David Marchand ` (6 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Tyler Retzlaff, Andrew Rybchenko, Konstantin Ananyev, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam Those error log messages don't need to span on multiple lines. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v2: - fixed format string crossing line boundaries, --- lib/acl/tb_mem.c | 4 ++-- lib/bpf/bpf_stub.c | 6 ++---- lib/eal/windows/eal_hugepages.c | 4 ++-- lib/ethdev/rte_ethdev.c | 12 ++++-------- 4 files changed, 10 insertions(+), 16 deletions(-) diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 6a9d96aaed..4ee65b23da 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,8 +26,8 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu)\n failed, currently allocated " - "by pool: %zu bytes\n", __func__, sz, pool->alloc); + RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; } diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index ebc5343896..83c2203622 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,8 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libelf installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", __func__); rte_errno = ENOTSUP; return NULL; @@ -36,8 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported with current config\n" - "rebuild with libpcap installed\n", + RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index b007dceb39..775c67e4c4 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -105,8 +105,8 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege\n" - "Verify that large-page support privilege is assigned to the current user\n"); + RTE_LOG(ERR, EAL, + "Cannot claim hugepage privilege, check large-page support privilege\n"); return -1; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index b9d99ece15..9dd0efa9d8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -6709,8 +6709,7 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly capability\n", + "port_id=%u is not configured, cannot get IP reassembly capability\n", port_id); 
return -EINVAL; } @@ -6745,8 +6744,7 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot get IP reassembly configuration\n", + "port_id=%u is not configured, cannot get IP reassembly configuration\n", port_id); return -EINVAL; } @@ -6779,16 +6777,14 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, if (dev->data->dev_configured == 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u is not configured.\n" - "Cannot set IP reassembly configuration\n", + "port_id=%u is not configured, cannot set IP reassembly configuration\n", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { RTE_ETHDEV_LOG(ERR, - "Device with port_id=%u started,\n" - "cannot configure IP reassembly params.\n", + "port_id=%u is started, cannot configure IP reassembly params.\n", port_id); return -EINVAL; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
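Each of the messages above embedded one or more \n inside a single RTE_LOG()/RTE_BPF_LOG()/RTE_ETHDEV_LOG() call, so one error produced several lines of output; after this patch, one event maps to one log line. That is also what the rest of the series builds on: once messages are single-line, a per-line helper can append the newline itself and any superfluous newline can be flagged. Purely as an illustration of how such a check can be done at build time with GCC — a sketch, not necessarily how the RTE_LOG_LINE helper introduced later in this series is implemented:

/* Sketch: reject a '\n' in the format string at compile time.
 * Relies on GCC folding __builtin_strchr() on string literals and
 * accepting the result in a static assertion (a GCC extension,
 * refused under -pedantic). */
#include <assert.h>
#include <stdio.h>

#define LOG_LINE(fmt, ...) do { \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain a newline"); \
	printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
	LOG_LINE("port_id=%u is not configured", 0U);
	/* LOG_LINE("port_id=%u is not configured\n", 0U); -> build error */
	return 0;
}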
* [PATCH v5 09/13] lib: add more logging helpers 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (7 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 08/13] lib: simplify multilines log messages David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 10/13] vhost: improve log for memory dumping configuration David Marchand ` (5 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Konstantin Ananyev, Anatoly Burakov, Harman Kalra, Jerin Jacob, Sunil Kumar Kori, Harry van Haaren, Stanislaw Kardach, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Vladimir Medvedkin, Sameh Gobriel, Reshma Pattan, Andrew Rybchenko, Cristian Dumitrescu, David Hunt, Sivaprasad Tummala, Honnappa Nagarahalli, Volodymyr Fialko, Maxime Coquelin, Chenbo Xia Add helpers for logging messages in libraries instead of calling RTE_LOG() directly. Those helpers take care of adding a \n: this will make the transition to RTE_LOG_LINE trivial. Note: - for acl and sched libraries that still has some debug multilines messages, a direct call to RTE_LOG is used: this will make it easier to notice such special cases, Signed-off-by: David Marchand <david.marchand@redhat.com> --- Changes since v4: - switched gears and only introduced new logging helpers in this patch, the conversion to RTE_LOG_LINE is moved to the last patch of the series, Changes since v3: - fixed some checkpatch complaints, --- lib/acl/acl_bld.c | 28 +-- lib/acl/acl_gen.c | 8 +- lib/acl/acl_log.h | 2 + lib/acl/rte_acl.c | 8 +- lib/acl/tb_mem.c | 2 +- lib/eal/common/eal_common_bus.c | 22 +- lib/eal/common/eal_common_class.c | 6 +- lib/eal/common/eal_common_config.c | 2 +- lib/eal/common/eal_common_debug.c | 8 +- lib/eal/common/eal_common_dev.c | 80 +++---- lib/eal/common/eal_common_devargs.c | 18 +- lib/eal/common/eal_common_dynmem.c | 34 +-- lib/eal/common/eal_common_fbarray.c | 12 +- lib/eal/common/eal_common_interrupts.c | 39 ++-- lib/eal/common/eal_common_lcore.c | 26 +-- lib/eal/common/eal_common_memalloc.c | 12 +- lib/eal/common/eal_common_memory.c | 66 +++--- lib/eal/common/eal_common_memzone.c | 24 +-- lib/eal/common/eal_common_options.c | 236 ++++++++++---------- lib/eal/common/eal_common_proc.c | 112 +++++----- lib/eal/common/eal_common_tailqs.c | 12 +- lib/eal/common/eal_common_thread.c | 12 +- lib/eal/common/eal_common_timer.c | 6 +- lib/eal/common/eal_common_trace_utils.c | 3 +- lib/eal/common/eal_private.h | 4 + lib/eal/common/eal_trace.h | 4 +- lib/eal/common/hotplug_mp.c | 54 ++--- lib/eal/common/malloc_elem.c | 6 +- lib/eal/common/malloc_heap.c | 40 ++-- lib/eal/common/malloc_mp.c | 72 +++---- lib/eal/common/rte_keepalive.c | 4 +- lib/eal/common/rte_malloc.c | 10 +- lib/eal/common/rte_service.c | 8 +- lib/eal/freebsd/eal.c | 75 +++---- lib/eal/freebsd/eal_alarm.c | 2 +- lib/eal/freebsd/eal_dev.c | 10 +- lib/eal/freebsd/eal_hugepage_info.c | 22 +- lib/eal/freebsd/eal_interrupts.c | 60 +++--- lib/eal/freebsd/eal_lcore.c | 2 +- lib/eal/freebsd/eal_memalloc.c | 11 +- lib/eal/freebsd/eal_memory.c | 34 +-- lib/eal/freebsd/eal_thread.c | 2 +- lib/eal/freebsd/eal_timer.c | 10 +- lib/eal/linux/eal.c | 122 +++++------ lib/eal/linux/eal_alarm.c | 2 +- lib/eal/linux/eal_dev.c | 40 ++-- lib/eal/linux/eal_hugepage_info.c | 38 ++-- lib/eal/linux/eal_interrupts.c | 116 +++++----- lib/eal/linux/eal_lcore.c | 4 
+- lib/eal/linux/eal_memalloc.c | 120 +++++------ lib/eal/linux/eal_memory.c | 208 +++++++++--------- lib/eal/linux/eal_thread.c | 6 +- lib/eal/linux/eal_timer.c | 10 +- lib/eal/linux/eal_vfio.c | 270 +++++++++++------------ lib/eal/linux/eal_vfio_mp_sync.c | 5 +- lib/eal/riscv/rte_cycles.c | 4 +- lib/eal/unix/eal_filesystem.c | 14 +- lib/eal/unix/eal_firmware.c | 3 +- lib/eal/unix/eal_unix_memory.c | 8 +- lib/eal/unix/rte_thread.c | 36 ++-- lib/eal/windows/eal.c | 36 ++-- lib/eal/windows/eal_alarm.c | 13 +- lib/eal/windows/eal_debug.c | 10 +- lib/eal/windows/eal_dev.c | 10 +- lib/eal/windows/eal_hugepages.c | 10 +- lib/eal/windows/eal_interrupts.c | 10 +- lib/eal/windows/eal_lcore.c | 7 +- lib/eal/windows/eal_memalloc.c | 50 ++--- lib/eal/windows/eal_memory.c | 20 +- lib/eal/windows/eal_windows.h | 6 +- lib/eal/windows/rte_thread.c | 29 +-- lib/efd/rte_efd.c | 60 +++--- lib/fib/fib_log.h | 4 +- lib/fib/rte_fib.c | 14 +- lib/fib/rte_fib6.c | 14 +- lib/hash/rte_cuckoo_hash.c | 54 ++--- lib/hash/rte_fbk_hash.c | 6 +- lib/hash/rte_hash_crc.c | 14 +- lib/hash/rte_thash.c | 22 +- lib/hash/rte_thash_gfni.c | 10 +- lib/ip_frag/ip_frag_common.h | 3 + lib/ip_frag/rte_ip_frag_common.c | 8 +- lib/latencystats/rte_latencystats.c | 43 ++-- lib/lpm/lpm_log.h | 2 + lib/lpm/rte_lpm.c | 12 +- lib/lpm/rte_lpm6.c | 10 +- lib/mbuf/mbuf_log.h | 2 + lib/mbuf/rte_mbuf.c | 14 +- lib/mbuf/rte_mbuf_dyn.c | 14 +- lib/mbuf/rte_mbuf_pool_ops.c | 4 +- lib/mempool/rte_mempool.c | 24 +-- lib/mempool/rte_mempool.h | 4 +- lib/mempool/rte_mempool_ops.c | 10 +- lib/pipeline/rte_pipeline.c | 231 ++++++++++---------- lib/port/port_log.h | 9 + lib/port/rte_port_ethdev.c | 20 +- lib/port/rte_port_eventdev.c | 20 +- lib/port/rte_port_fd.c | 26 +-- lib/port/rte_port_frag.c | 16 +- lib/port/rte_port_ras.c | 14 +- lib/port/rte_port_ring.c | 20 +- lib/port/rte_port_sched.c | 14 +- lib/port/rte_port_source_sink.c | 50 ++--- lib/port/rte_port_sym_crypto.c | 20 +- lib/power/guest_channel.c | 40 ++-- lib/power/power_acpi_cpufreq.c | 106 ++++----- lib/power/power_amd_pstate_cpufreq.c | 120 +++++------ lib/power/power_common.c | 10 +- lib/power/power_common.h | 4 +- lib/power/power_cppc_cpufreq.c | 118 +++++----- lib/power/power_intel_uncore.c | 68 +++--- lib/power/power_kvm_vm.c | 22 +- lib/power/power_pstate_cpufreq.c | 144 ++++++------- lib/power/rte_power.c | 22 +- lib/power/rte_power_pmd_mgmt.c | 34 +-- lib/power/rte_power_uncore.c | 14 +- lib/rcu/rte_rcu_qsbr.c | 62 ++---- lib/rcu/rte_rcu_qsbr.h | 1 + lib/reorder/rte_reorder.c | 34 +-- lib/rib/rib_log.h | 6 +- lib/rib/rte_rib.c | 13 +- lib/rib/rte_rib6.c | 10 +- lib/ring/rte_ring.c | 26 +-- lib/sched/rte_pie.c | 18 +- lib/sched/rte_sched.c | 274 ++++++++++++------------ lib/sched/rte_sched_log.h | 2 + lib/table/rte_table_acl.c | 74 +++---- lib/table/rte_table_array.c | 18 +- lib/table/rte_table_hash_cuckoo.c | 24 ++- lib/table/rte_table_hash_ext.c | 24 ++- lib/table/rte_table_hash_key16.c | 40 ++-- lib/table/rte_table_hash_key32.c | 40 ++-- lib/table/rte_table_hash_key8.c | 40 ++-- lib/table/rte_table_hash_lru.c | 24 ++- lib/table/rte_table_lpm.c | 44 ++-- lib/table/rte_table_lpm_ipv6.c | 46 ++-- lib/table/rte_table_stub.c | 6 +- lib/table/table_log.h | 9 + lib/vhost/fd_man.c | 10 +- 139 files changed, 2435 insertions(+), 2315 deletions(-) create mode 100644 lib/port/port_log.h create mode 100644 lib/table/table_log.h diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c index eaf8770415..56e3b31ed6 100644 --- a/lib/acl/acl_bld.c +++ b/lib/acl/acl_bld.c @@ -1017,8 +1017,8 @@ 
build_trie(struct acl_build_context *context, struct rte_acl_build_rule *head, break; default: - RTE_LOG(ERR, ACL, - "Error in rule[%u] type - %hhu\n", + ACL_LOG(ERR, + "Error in rule[%u] type - %hhu", rule->f->data.userdata, rule->config->defs[n].type); return NULL; @@ -1374,7 +1374,7 @@ acl_build_tries(struct acl_build_context *context, last = build_one_trie(context, rule_sets, n, context->node_max); if (context->bld_tries[n].trie == NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + ACL_LOG(ERR, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1383,8 +1383,8 @@ acl_build_tries(struct acl_build_context *context, break; if (num_tries == RTE_DIM(context->tries)) { - RTE_LOG(ERR, ACL, - "Exceeded max number of tries: %u\n", + ACL_LOG(ERR, + "Exceeded max number of tries: %u", num_tries); return -ENOMEM; } @@ -1409,7 +1409,7 @@ acl_build_tries(struct acl_build_context *context, */ last = build_one_trie(context, rule_sets, n, INT32_MAX); if (context->bld_tries[n].trie == NULL || last != NULL) { - RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n); + ACL_LOG(ERR, "Build of %u-th trie failed", n); return -ENOMEM; } @@ -1435,8 +1435,8 @@ acl_build_log(const struct acl_build_context *ctx) for (n = 0; n < RTE_DIM(ctx->tries); n++) { if (ctx->tries[n].count != 0) - RTE_LOG(DEBUG, ACL, - "trie %u: number of rules: %u, indexes: %u\n", + ACL_LOG(DEBUG, + "trie %u: number of rules: %u, indexes: %u", n, ctx->tries[n].count, ctx->tries[n].num_data_indexes); } @@ -1526,8 +1526,8 @@ acl_bld(struct acl_build_context *bcx, struct rte_acl_ctx *ctx, /* build phase runs out of memory. */ if (rc != 0) { - RTE_LOG(ERR, ACL, - "ACL context: %s, %s() failed with error code: %d\n", + ACL_LOG(ERR, + "ACL context: %s, %s() failed with error code: %d", bcx->acx->name, __func__, rc); return rc; } @@ -1568,8 +1568,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) for (i = 0; i != cfg->num_fields; i++) { if (cfg->defs[i].type > RTE_ACL_FIELD_TYPE_BITMASK) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid type: %hhu for %u-th field\n", + ACL_LOG(ERR, + "ACL context: %s, invalid type: %hhu for %u-th field", ctx->name, cfg->defs[i].type, i); return -EINVAL; } @@ -1580,8 +1580,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg) ; if (j == RTE_DIM(field_sizes)) { - RTE_LOG(ERR, ACL, - "ACL context: %s, invalid size: %hhu for %u-th field\n", + ACL_LOG(ERR, + "ACL context: %s, invalid size: %hhu for %u-th field", ctx->name, cfg->defs[i].size, i); return -EINVAL; } diff --git a/lib/acl/acl_gen.c b/lib/acl/acl_gen.c index 03a47ea231..3c53d24056 100644 --- a/lib/acl/acl_gen.c +++ b/lib/acl/acl_gen.c @@ -471,9 +471,9 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, XMM_SIZE; if (total_size > max_size) { - RTE_LOG(DEBUG, ACL, + ACL_LOG(DEBUG, "Gen phase for ACL ctx \"%s\" exceeds max_size limit, " - "bytes required: %zu, allowed: %zu\n", + "bytes required: %zu, allowed: %zu", ctx->name, total_size, max_size); return -ERANGE; } @@ -481,8 +481,8 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie, mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE, ctx->socket_id); if (mem == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + ACL_LOG(ERR, + "allocation of %zu bytes on socket %d for %s failed", total_size, ctx->socket_id, ctx->name); return -ENOMEM; } diff --git a/lib/acl/acl_log.h b/lib/acl/acl_log.h index 6147116d8d..2d7c376058 100644 --- a/lib/acl/acl_log.h +++ 
b/lib/acl/acl_log.h @@ -4,3 +4,5 @@ extern int acl_logtype; #define RTE_LOGTYPE_ACL acl_logtype +#define ACL_LOG(level, fmt, ...) \ + RTE_LOG(level, ACL, fmt "\n", ## __VA_ARGS__) diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c index 760c3587d4..245b9b6d7a 100644 --- a/lib/acl/rte_acl.c +++ b/lib/acl/rte_acl.c @@ -399,15 +399,15 @@ rte_acl_create(const struct rte_acl_param *param) te = rte_zmalloc("ACL_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, ACL, "Cannot allocate tailq entry!\n"); + ACL_LOG(ERR, "Cannot allocate tailq entry!"); goto exit; } ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, param->socket_id); if (ctx == NULL) { - RTE_LOG(ERR, ACL, - "allocation of %zu bytes on socket %d for %s failed\n", + ACL_LOG(ERR, + "allocation of %zu bytes on socket %d for %s failed", sz, param->socket_id, name); rte_free(te); goto exit; @@ -473,7 +473,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules, ((uintptr_t)rules + i * ctx->rule_sz); rc = acl_check_rule(&rv->data); if (rc != 0) { - RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n", + ACL_LOG(ERR, "%s(%s): rule #%u is invalid", __func__, ctx->name, i + 1); return rc; } diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c index 4ee65b23da..9264433422 100644 --- a/lib/acl/tb_mem.c +++ b/lib/acl/tb_mem.c @@ -26,7 +26,7 @@ tb_pool(struct tb_mem_pool *pool, size_t sz) size = sz + pool->alignment - 1; block = calloc(1, size + sizeof(*pool->block)); if (block == NULL) { - RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated by pool: %zu bytes\n", + ACL_LOG(ERR, "%s(%zu) failed, currently allocated by pool: %zu bytes", __func__, sz, pool->alloc); siglongjmp(pool->fail, -ENOMEM); return NULL; diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c index acac14131a..7cbd09c421 100644 --- a/lib/eal/common/eal_common_bus.c +++ b/lib/eal/common/eal_common_bus.c @@ -35,14 +35,14 @@ rte_bus_register(struct rte_bus *bus) RTE_VERIFY(!bus->plug || bus->unplug); TAILQ_INSERT_TAIL(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] bus.\n", rte_bus_name(bus)); + EAL_LOG(DEBUG, "Registered [%s] bus.", rte_bus_name(bus)); } void rte_bus_unregister(struct rte_bus *bus) { TAILQ_REMOVE(&rte_bus_list, bus, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] bus.\n", rte_bus_name(bus)); + EAL_LOG(DEBUG, "Unregistered [%s] bus.", rte_bus_name(bus)); } /* Scan all the buses for registered devices */ @@ -55,7 +55,7 @@ rte_bus_scan(void) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus->scan(); if (ret) - RTE_LOG(ERR, EAL, "Scan for (%s) bus failed.\n", + EAL_LOG(ERR, "Scan for (%s) bus failed.", rte_bus_name(bus)); } @@ -77,14 +77,14 @@ rte_bus_probe(void) ret = bus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + EAL_LOG(ERR, "Bus (%s) probe failed.", rte_bus_name(bus)); } if (vbus) { ret = vbus->probe(); if (ret) - RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n", + EAL_LOG(ERR, "Bus (%s) probe failed.", rte_bus_name(vbus)); } @@ -133,7 +133,7 @@ rte_bus_dump(FILE *f) TAILQ_FOREACH(bus, &rte_bus_list, next) { ret = bus_dump_one(f, bus); if (ret) { - RTE_LOG(ERR, EAL, "Unable to write to stream (%d)\n", + EAL_LOG(ERR, "Unable to write to stream (%d)", ret); break; } @@ -235,15 +235,15 @@ rte_bus_get_iommu_class(void) continue; bus_iova_mode = bus->get_iommu_class(); - RTE_LOG(DEBUG, EAL, "Bus %s wants IOVA as '%s'\n", + EAL_LOG(DEBUG, "Bus %s wants IOVA as '%s'", rte_bus_name(bus), bus_iova_mode == RTE_IOVA_DC ? "DC" : (bus_iova_mode == RTE_IOVA_PA ? 
"PA" : "VA")); if (bus_iova_mode == RTE_IOVA_PA) { buses_want_pa = true; if (!RTE_IOVA_IN_MBUF) - RTE_LOG(WARNING, EAL, - "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.\n", + EAL_LOG(WARNING, + "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.", rte_bus_name(bus)); } else if (bus_iova_mode == RTE_IOVA_VA) buses_want_va = true; @@ -255,8 +255,8 @@ rte_bus_get_iommu_class(void) } else { mode = RTE_IOVA_DC; if (buses_want_va) { - RTE_LOG(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.\n"); - RTE_LOG(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.\n"); + EAL_LOG(WARNING, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'."); + EAL_LOG(WARNING, "Depending on the final decision by the EAL, not all buses may be able to initialize."); } } diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c index 0187076af1..4938ec6707 100644 --- a/lib/eal/common/eal_common_class.c +++ b/lib/eal/common/eal_common_class.c @@ -9,6 +9,8 @@ #include <rte_class.h> #include <rte_debug.h> +#include "eal_private.h" + static struct rte_class_list rte_class_list = TAILQ_HEAD_INITIALIZER(rte_class_list); @@ -19,14 +21,14 @@ rte_class_register(struct rte_class *class) RTE_VERIFY(class->name && strlen(class->name)); TAILQ_INSERT_TAIL(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Registered [%s] device class.\n", class->name); + EAL_LOG(DEBUG, "Registered [%s] device class.", class->name); } void rte_class_unregister(struct rte_class *class) { TAILQ_REMOVE(&rte_class_list, class, next); - RTE_LOG(DEBUG, EAL, "Unregistered [%s] device class.\n", class->name); + EAL_LOG(DEBUG, "Unregistered [%s] device class.", class->name); } struct rte_class * diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c index 0daf0f3188..a72d46f602 100644 --- a/lib/eal/common/eal_common_config.c +++ b/lib/eal/common/eal_common_config.c @@ -31,7 +31,7 @@ int eal_set_runtime_dir(const char *run_dir) { if (strlcpy(runtime_dir, run_dir, PATH_MAX) >= PATH_MAX) { - RTE_LOG(ERR, EAL, "Runtime directory string too long\n"); + EAL_LOG(ERR, "Runtime directory string too long"); return -1; } diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c index 9cac9c6390..3e77995896 100644 --- a/lib/eal/common/eal_common_debug.c +++ b/lib/eal/common/eal_common_debug.c @@ -11,12 +11,14 @@ #include <rte_debug.h> #include <rte_errno.h> +#include "eal_private.h" + void __rte_panic(const char *funcname, const char *format, ...) { va_list ap; - rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, "PANIC in %s():\n", funcname); + EAL_LOG(CRIT, "PANIC in %s():", funcname); va_start(ap, format); rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap); va_end(ap); @@ -42,7 +44,7 @@ rte_exit(int exit_code, const char *format, ...) 
va_end(ap); if (rte_eal_cleanup() != 0 && rte_errno != EALREADY) - RTE_LOG(CRIT, EAL, - "EAL could not release all resources\n"); + EAL_LOG(CRIT, + "EAL could not release all resources"); exit(exit_code); } diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c index 614ef6c9fc..a99252b02f 100644 --- a/lib/eal/common/eal_common_dev.c +++ b/lib/eal/common/eal_common_dev.c @@ -182,7 +182,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) goto err_devarg; if (da->bus->plug == NULL) { - RTE_LOG(ERR, EAL, "Function plug not supported by bus (%s)\n", + EAL_LOG(ERR, "Function plug not supported by bus (%s)", da->bus->name); ret = -ENOTSUP; goto err_devarg; @@ -199,7 +199,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) dev = da->bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s)\n", + EAL_LOG(ERR, "Cannot find device (%s)", da->name); ret = -ENODEV; goto err_devarg; @@ -214,7 +214,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev) ret = -ENOTSUP; if (ret && !rte_dev_is_probed(dev)) { /* if hasn't ever succeeded */ - RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n", + EAL_LOG(ERR, "Driver cannot attach the device (%s)", dev->name); return ret; } @@ -248,13 +248,13 @@ rte_dev_probe(const char *devargs) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + EAL_LOG(ERR, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug add device\n"); + EAL_LOG(ERR, + "Failed to hotplug add device"); return req.result; } @@ -264,8 +264,8 @@ rte_dev_probe(const char *devargs) ret = local_dev_probe(devargs, &dev); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on primary process\n"); + EAL_LOG(ERR, + "Failed to attach device on primary process"); /** * it is possible that secondary process failed to attached a @@ -282,8 +282,8 @@ rte_dev_probe(const char *devargs) /* if any communication error, we need to rollback. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug add request to secondary\n"); + EAL_LOG(ERR, + "Failed to send hotplug add request to secondary"); ret = -ENOMSG; goto rollback; } @@ -293,8 +293,8 @@ rte_dev_probe(const char *devargs) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to attach device on secondary process\n"); + EAL_LOG(ERR, + "Failed to attach device on secondary process"); ret = req.result; /* for -EEXIST, we don't need to rollback. */ @@ -310,15 +310,15 @@ rte_dev_probe(const char *devargs) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + EAL_LOG(WARNING, "Failed to rollback device attach on secondary." - "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); /* primary rollback itself. */ if (local_dev_remove(dev) != 0) - RTE_LOG(WARNING, EAL, + EAL_LOG(WARNING, "Failed to rollback device attach on primary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -331,13 +331,13 @@ rte_eal_hotplug_remove(const char *busname, const char *devname) bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname); + EAL_LOG(ERR, "Cannot find bus (%s)", busname); return -ENOENT; } dev = bus->find_device(NULL, cmp_dev_name, devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", devname); + EAL_LOG(ERR, "Cannot find plugged device (%s)", devname); return -EINVAL; } @@ -351,14 +351,14 @@ local_dev_remove(struct rte_device *dev) int ret; if (dev->bus->unplug == NULL) { - RTE_LOG(ERR, EAL, "Function unplug not supported by bus (%s)\n", + EAL_LOG(ERR, "Function unplug not supported by bus (%s)", dev->bus->name); return -ENOTSUP; } ret = dev->bus->unplug(dev); if (ret) { - RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n", + EAL_LOG(ERR, "Driver cannot detach the device (%s)", dev->name); return (ret < 0) ? ret : -ENOENT; } @@ -374,7 +374,7 @@ rte_dev_remove(struct rte_device *dev) int ret; if (!rte_dev_is_probed(dev)) { - RTE_LOG(ERR, EAL, "Device is not probed\n"); + EAL_LOG(ERR, "Device is not probed"); return -ENOENT; } @@ -394,13 +394,13 @@ rte_dev_remove(struct rte_device *dev) */ ret = eal_dev_hotplug_request_to_primary(&req); if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send hotplug request to primary\n"); + EAL_LOG(ERR, + "Failed to send hotplug request to primary"); return -ENOMSG; } if (req.result != 0) - RTE_LOG(ERR, EAL, - "Failed to hotplug remove device\n"); + EAL_LOG(ERR, + "Failed to hotplug remove device"); return req.result; } @@ -414,8 +414,8 @@ rte_dev_remove(struct rte_device *dev) * part of the secondary processes still detached it successfully. */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to send device detach request to secondary\n"); + EAL_LOG(ERR, + "Failed to send device detach request to secondary"); ret = -ENOMSG; goto rollback; } @@ -425,8 +425,8 @@ rte_dev_remove(struct rte_device *dev) * is necessary. */ if (req.result != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on secondary process\n"); + EAL_LOG(ERR, + "Failed to detach device on secondary process"); ret = req.result; /** * if -ENOENT, we don't need to rollback, since devices is @@ -441,8 +441,8 @@ rte_dev_remove(struct rte_device *dev) /* if primary failed, still need to consider if rollback is necessary */ if (ret != 0) { - RTE_LOG(ERR, EAL, - "Failed to detach device on primary process\n"); + EAL_LOG(ERR, + "Failed to detach device on primary process"); /* if -ENOENT, we don't need to rollback */ if (ret == -ENOENT) return ret; @@ -456,9 +456,9 @@ rte_dev_remove(struct rte_device *dev) /* primary send rollback request to secondary. */ if (eal_dev_hotplug_request_to_secondary(&req) != 0) - RTE_LOG(WARNING, EAL, + EAL_LOG(WARNING, "Failed to rollback device detach on secondary." 
- "Devices in secondary may not sync with primary\n"); + "Devices in secondary may not sync with primary"); return ret; } @@ -508,16 +508,16 @@ rte_dev_event_callback_register(const char *device_name, } TAILQ_INSERT_TAIL(&dev_event_cbs, event_cb, next); } else { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Failed to allocate memory for device " "event callback."); ret = -ENOMEM; goto error; } } else { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "The callback is already exist, no need " - "to register again.\n"); + "to register again."); event_cb = NULL; ret = -EEXIST; goto error; @@ -635,17 +635,17 @@ rte_dev_iterator_init(struct rte_dev_iterator *it, * one layer specified. */ if (bus == NULL && cls == NULL) { - RTE_LOG(DEBUG, EAL, "Either bus or class must be specified.\n"); + EAL_LOG(DEBUG, "Either bus or class must be specified."); rte_errno = EINVAL; goto get_out; } if (bus != NULL && bus->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Bus %s not supported\n", bus->name); + EAL_LOG(DEBUG, "Bus %s not supported", bus->name); rte_errno = ENOTSUP; goto get_out; } if (cls != NULL && cls->dev_iterate == NULL) { - RTE_LOG(DEBUG, EAL, "Class %s not supported\n", cls->name); + EAL_LOG(DEBUG, "Class %s not supported", cls->name); rte_errno = ENOTSUP; goto get_out; } diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c index fb5d0a293b..a64805b268 100644 --- a/lib/eal/common/eal_common_devargs.c +++ b/lib/eal/common/eal_common_devargs.c @@ -39,12 +39,12 @@ devargs_bus_parse_default(struct rte_devargs *devargs, /* Parse devargs name from bus key-value list. */ name = rte_kvargs_get(bus_args, "name"); if (name == NULL) { - RTE_LOG(DEBUG, EAL, "devargs name not found: %s\n", + EAL_LOG(DEBUG, "devargs name not found: %s", devargs->data); return 0; } if (rte_strscpy(devargs->name, name, sizeof(devargs->name)) < 0) { - RTE_LOG(ERR, EAL, "devargs name too long: %s\n", + EAL_LOG(ERR, "devargs name too long: %s", devargs->data); return -E2BIG; } @@ -79,7 +79,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, if (devargs->data != devstr) { devargs->data = strdup(devstr); if (devargs->data == NULL) { - RTE_LOG(ERR, EAL, "OOM\n"); + EAL_LOG(ERR, "OOM"); ret = -ENOMEM; goto get_out; } @@ -133,7 +133,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->bus_str = layers[i].str; devargs->bus = rte_bus_find_by_name(kv->value); if (devargs->bus == NULL) { - RTE_LOG(ERR, EAL, "Could not find bus \"%s\"\n", + EAL_LOG(ERR, "Could not find bus \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -142,7 +142,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs, devargs->cls_str = layers[i].str; devargs->cls = rte_class_find_by_name(kv->value); if (devargs->cls == NULL) { - RTE_LOG(ERR, EAL, "Could not find class \"%s\"\n", + EAL_LOG(ERR, "Could not find class \"%s\"", kv->value); ret = -EFAULT; goto get_out; @@ -217,7 +217,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) da->name[i] = devname[i]; i++; if (i == maxlen) { - RTE_LOG(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu\n", + EAL_LOG(WARNING, "Parsing \"%s\": device name should be shorter than %zu", dev, maxlen); da->name[i - 1] = '\0'; return -EINVAL; @@ -227,7 +227,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev) if (bus == NULL) { bus = rte_bus_find_by_device_name(da->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "failed to parse device \"%s\"\n", + EAL_LOG(ERR, "failed to parse device \"%s\"", da->name); return -EFAULT; } @@ -239,7 +239,7 @@ 
rte_devargs_parse(struct rte_devargs *da, const char *dev) else da->data = strdup(""); if (da->data == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse arguments\n"); + EAL_LOG(ERR, "not enough memory to parse arguments"); return -ENOMEM; } da->drv_str = da->data; @@ -266,7 +266,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...) len += 1; dev = calloc(1, (size_t)len); if (dev == NULL) { - RTE_LOG(ERR, EAL, "not enough memory to parse device\n"); + EAL_LOG(ERR, "not enough memory to parse device"); return -ENOMEM; } diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c index 95da55d9b0..b4dc231940 100644 --- a/lib/eal/common/eal_common_dynmem.c +++ b/lib/eal/common/eal_common_dynmem.c @@ -76,7 +76,7 @@ eal_dynmem_memseg_lists_init(void) n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count(); memtypes = calloc(n_memtypes, sizeof(*memtypes)); if (memtypes == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n"); + EAL_LOG(ERR, "Cannot allocate space for memory types"); return -1; } @@ -101,8 +101,8 @@ eal_dynmem_memseg_lists_init(void) memtypes[cur_type].page_sz = hugepage_sz; memtypes[cur_type].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected memory type: " - "socket_id:%u hugepage_sz:%" PRIu64 "\n", + EAL_LOG(DEBUG, "Detected memory type: " + "socket_id:%u hugepage_sz:%" PRIu64, socket_id, hugepage_sz); } } @@ -120,7 +120,7 @@ eal_dynmem_memseg_lists_init(void) max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes; if (max_seglists_per_type == 0) { - RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS\n"); + EAL_LOG(ERR, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } @@ -171,15 +171,15 @@ eal_dynmem_memseg_lists_init(void) /* limit number of segment lists according to our maximum */ n_seglists = RTE_MIN(n_seglists, max_seglists_per_type); - RTE_LOG(DEBUG, EAL, "Creating %i segment lists: " - "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n", + EAL_LOG(DEBUG, "Creating %i segment lists: " + "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64, n_seglists, n_segs, socket_id, pagesz); /* create all segment lists */ for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) { if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + EAL_LOG(ERR, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); goto out; } msl = &mcfg->memsegs[msl_idx++]; @@ -189,7 +189,7 @@ eal_dynmem_memseg_lists_init(void) goto out; if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + EAL_LOG(ERR, "Cannot allocate VA space for memseg list"); goto out; } } @@ -287,9 +287,9 @@ eal_dynmem_hugepage_init(void) if (num_pages == 0) continue; - RTE_LOG(DEBUG, EAL, + EAL_LOG(DEBUG, "Allocating %u pages of size %" PRIu64 "M " - "on socket %i\n", + "on socket %i", num_pages, hpi->hugepage_sz >> 20, socket_id); /* we may not be able to allocate all pages in one go, @@ -307,7 +307,7 @@ eal_dynmem_hugepage_init(void) pages = malloc(sizeof(*pages) * needed); if (pages == NULL) { - RTE_LOG(ERR, EAL, "Failed to malloc pages\n"); + EAL_LOG(ERR, "Failed to malloc pages"); return -1; } @@ -342,7 +342,7 @@ eal_dynmem_hugepage_init(void) continue; if (rte_mem_alloc_validator_register("socket-limit", limits_callback, i, limit)) - RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n"); + 
EAL_LOG(ERR, "Failed to register socket limits validator callback"); } } return 0; @@ -515,8 +515,8 @@ eal_dynmem_calc_num_pages_per_socket( internal_conf->socket_mem[socket] / 0x100000); available = requested - ((unsigned int)(memory[socket] / 0x100000)); - RTE_LOG(ERR, EAL, "Not enough memory available on " - "socket %u! Requested: %uMB, available: %uMB\n", + EAL_LOG(ERR, "Not enough memory available on " + "socket %u! Requested: %uMB, available: %uMB", socket, requested, available); return -1; } @@ -526,8 +526,8 @@ eal_dynmem_calc_num_pages_per_socket( if (total_mem > 0) { requested = (unsigned int)(internal_conf->memory / 0x100000); available = requested - (unsigned int)(total_mem / 0x100000); - RTE_LOG(ERR, EAL, "Not enough memory available! " - "Requested: %uMB, available: %uMB\n", + EAL_LOG(ERR, "Not enough memory available! " + "Requested: %uMB, available: %uMB", requested, available); return -1; } diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c index 2055bfa57d..0fe5bcfe06 100644 --- a/lib/eal/common/eal_common_fbarray.c +++ b/lib/eal/common/eal_common_fbarray.c @@ -83,7 +83,7 @@ resize_and_map(int fd, const char *path, void *addr, size_t len) void *map_addr; if (eal_file_truncate(fd, len)) { - RTE_LOG(ERR, EAL, "Cannot truncate %s\n", path); + EAL_LOG(ERR, "Cannot truncate %s", path); return -1; } @@ -755,7 +755,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, void *new_data = rte_mem_map(data, mmap_len, RTE_PROT_READ | RTE_PROT_WRITE, flags, fd, 0); if (new_data == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s\n", + EAL_LOG(DEBUG, "%s(): couldn't remap anonymous memory: %s", __func__, rte_strerror(rte_errno)); goto fail; } @@ -770,12 +770,12 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len, */ fd = eal_file_open(path, EAL_OPEN_CREATE | EAL_OPEN_READWRITE); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't open %s: %s\n", + EAL_LOG(DEBUG, "%s(): couldn't open %s: %s", __func__, path, rte_strerror(rte_errno)); goto fail; } else if (eal_file_lock( fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't lock %s: %s\n", + EAL_LOG(DEBUG, "%s(): couldn't lock %s: %s", __func__, path, rte_strerror(rte_errno)); rte_errno = EBUSY; goto fail; @@ -1017,7 +1017,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) */ fd = tmp->fd; if (eal_file_lock(fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) { - RTE_LOG(DEBUG, EAL, "Cannot destroy fbarray - another process is using it\n"); + EAL_LOG(DEBUG, "Cannot destroy fbarray - another process is using it"); rte_errno = EBUSY; ret = -1; goto out; @@ -1026,7 +1026,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr) /* we're OK to destroy the file */ eal_get_fbarray_path(path, sizeof(path), arr->name); if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "Cannot unlink fbarray: %s\n", + EAL_LOG(DEBUG, "Cannot unlink fbarray: %s", strerror(errno)); rte_errno = errno; /* diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c index 97b64fed58..b4d7b18fae 100644 --- a/lib/eal/common/eal_common_interrupts.c +++ b/lib/eal/common/eal_common_interrupts.c @@ -11,11 +11,12 @@ #include <rte_malloc.h> #include "eal_interrupts.h" +#include "eal_private.h" /* Macros to check for valid interrupt handle */ #define CHECK_VALID_INTR_HANDLE(intr_handle) do { \ if (intr_handle == NULL) { \ - RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \ + EAL_LOG(DEBUG, "Interrupt instance 
unallocated"); \ rte_errno = EINVAL; \ goto fail; \ } \ @@ -37,7 +38,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) * defined flags. */ if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) { - RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags); + EAL_LOG(DEBUG, "Invalid alloc flag passed 0x%x", flags); rte_errno = EINVAL; return NULL; } @@ -48,7 +49,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) else intr_handle = calloc(1, sizeof(*intr_handle)); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n"); + EAL_LOG(ERR, "Failed to allocate intr_handle"); rte_errno = ENOMEM; return NULL; } @@ -61,7 +62,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(int)); } if (intr_handle->efds == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n"); + EAL_LOG(ERR, "Fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -75,7 +76,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags) sizeof(struct rte_epoll_event)); } if (intr_handle->elist == NULL) { - RTE_LOG(ERR, EAL, "fail to allocate event fd list\n"); + EAL_LOG(ERR, "fail to allocate event fd list"); rte_errno = ENOMEM; goto fail; } @@ -100,7 +101,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src) struct rte_intr_handle *intr_handle; if (src == NULL) { - RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n"); + EAL_LOG(DEBUG, "Source interrupt instance unallocated"); rte_errno = EINVAL; return NULL; } @@ -129,7 +130,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) CHECK_VALID_INTR_HANDLE(intr_handle); if (size == 0) { - RTE_LOG(DEBUG, EAL, "Size can't be zero\n"); + EAL_LOG(DEBUG, "Size can't be zero"); rte_errno = EINVAL; goto fail; } @@ -143,7 +144,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) tmp_efds = realloc(intr_handle->efds, size * sizeof(int)); } if (tmp_efds == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n"); + EAL_LOG(ERR, "Failed to realloc the efds list"); rte_errno = ENOMEM; goto fail; } @@ -157,7 +158,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size) size * sizeof(struct rte_epoll_event)); } if (tmp_elist == NULL) { - RTE_LOG(ERR, EAL, "Failed to realloc the event list\n"); + EAL_LOG(ERR, "Failed to realloc the event list"); rte_errno = ENOMEM; goto fail; } @@ -253,8 +254,8 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (max_intr > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds " - "the number of available events (%d)\n", max_intr, + EAL_LOG(DEBUG, "Maximum interrupt vector ID (%d) exceeds " + "the number of available events (%d)", max_intr, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -332,7 +333,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + EAL_LOG(DEBUG, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = EINVAL; goto fail; @@ -349,7 +350,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + EAL_LOG(DEBUG, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); 
rte_errno = ERANGE; goto fail; @@ -368,7 +369,7 @@ struct rte_epoll_event *rte_intr_elist_index_get( CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + EAL_LOG(DEBUG, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -385,7 +386,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index, + EAL_LOG(DEBUG, "Invalid index %d, max limit %d", index, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -408,7 +409,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, return 0; if (size > intr_handle->nb_intr) { - RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size, + EAL_LOG(DEBUG, "Invalid size %d, max limit %d", size, intr_handle->nb_intr); rte_errno = ERANGE; goto fail; @@ -419,7 +420,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle, else intr_handle->intr_vec = calloc(size, sizeof(int)); if (intr_handle->intr_vec == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size); + EAL_LOG(ERR, "Failed to allocate %d intr_vec", size); rte_errno = ENOMEM; goto fail; } @@ -437,7 +438,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + EAL_LOG(DEBUG, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; @@ -454,7 +455,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle, CHECK_VALID_INTR_HANDLE(intr_handle); if (index >= intr_handle->vec_list_size) { - RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n", + EAL_LOG(DEBUG, "Index %d greater than vec list size %d", index, intr_handle->vec_list_size); rte_errno = ERANGE; goto fail; diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c index 6807a38247..2ff9252c52 100644 --- a/lib/eal/common/eal_common_lcore.c +++ b/lib/eal/common/eal_common_lcore.c @@ -174,8 +174,8 @@ rte_eal_cpu_init(void) lcore_config[lcore_id].core_role = ROLE_RTE; lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id); lcore_config[lcore_id].socket_id = socket_id; - RTE_LOG(DEBUG, EAL, "Detected lcore %u as " - "core %u on socket %u\n", + EAL_LOG(DEBUG, "Detected lcore %u as " + "core %u on socket %u", lcore_id, lcore_config[lcore_id].core_id, lcore_config[lcore_id].socket_id); count++; @@ -183,17 +183,17 @@ rte_eal_cpu_init(void) for (; lcore_id < CPU_SETSIZE; lcore_id++) { if (eal_cpu_detected(lcore_id) == 0) continue; - RTE_LOG(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u\n", + EAL_LOG(DEBUG, "Skipped lcore %u as core %u on socket %u", lcore_id, eal_cpu_core_id(lcore_id), eal_cpu_socket_id(lcore_id)); } /* Set the count of enabled logical cores of the EAL configuration */ config->lcore_count = count; - RTE_LOG(DEBUG, EAL, - "Maximum logical cores by configuration: %u\n", + EAL_LOG(DEBUG, + "Maximum logical cores by configuration: %u", RTE_MAX_LCORE); - RTE_LOG(INFO, EAL, "Detected CPU lcores: %u\n", config->lcore_count); + EAL_LOG(INFO, "Detected CPU lcores: %u", config->lcore_count); /* sort all socket id's in ascending order */ qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id), @@ -208,7 +208,7 @@ rte_eal_cpu_init(void) socket_id; 
prev_socket_id = socket_id; } - RTE_LOG(INFO, EAL, "Detected NUMA nodes: %u\n", config->numa_node_count); + EAL_LOG(INFO, "Detected NUMA nodes: %u", config->numa_node_count); return 0; } @@ -247,7 +247,7 @@ callback_init(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->init == NULL) return 0; - RTE_LOG(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u\n", + EAL_LOG(DEBUG, "Call init for lcore callback %s, lcore_id %u", callback->name, lcore_id); return callback->init(lcore_id, callback->arg); } @@ -257,7 +257,7 @@ callback_uninit(struct lcore_callback *callback, unsigned int lcore_id) { if (callback->uninit == NULL) return; - RTE_LOG(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u\n", + EAL_LOG(DEBUG, "Call uninit for lcore callback %s, lcore_id %u", callback->name, lcore_id); callback->uninit(lcore_id, callback->arg); } @@ -311,7 +311,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init, } no_init: TAILQ_INSERT_TAIL(&lcore_callbacks, callback, next); - RTE_LOG(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).\n", + EAL_LOG(DEBUG, "Registered new lcore callback %s (%sinit, %suninit).", callback->name, callback->init == NULL ? "NO " : "", callback->uninit == NULL ? "NO " : ""); out: @@ -339,7 +339,7 @@ rte_lcore_callback_unregister(void *handle) no_uninit: TAILQ_REMOVE(&lcore_callbacks, callback, next); rte_rwlock_write_unlock(&lcore_lock); - RTE_LOG(DEBUG, EAL, "Unregistered lcore callback %s-%p.\n", + EAL_LOG(DEBUG, "Unregistered lcore callback %s-%p.", callback->name, callback->arg); free_callback(callback); } @@ -361,7 +361,7 @@ eal_lcore_non_eal_allocate(void) break; } if (lcore_id == RTE_MAX_LCORE) { - RTE_LOG(DEBUG, EAL, "No lcore available.\n"); + EAL_LOG(DEBUG, "No lcore available."); goto out; } TAILQ_FOREACH(callback, &lcore_callbacks, next) { @@ -375,7 +375,7 @@ eal_lcore_non_eal_allocate(void) callback_uninit(prev, lcore_id); prev = TAILQ_PREV(prev, lcore_callbacks_head, next); } - RTE_LOG(DEBUG, EAL, "Initialization refused for lcore %u.\n", + EAL_LOG(DEBUG, "Initialization refused for lcore %u.", lcore_id); cfg->lcore_role[lcore_id] = ROLE_OFF; cfg->lcore_count--; diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c index ab04479c1c..47e782f395 100644 --- a/lib/eal/common/eal_common_memalloc.c +++ b/lib/eal/common/eal_common_memalloc.c @@ -186,7 +186,7 @@ eal_memalloc_mem_event_callback_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' registered\n", + EAL_LOG(DEBUG, "Mem event callback '%s:%p' registered", name, arg); unlock: @@ -225,7 +225,7 @@ eal_memalloc_mem_event_callback_unregister(const char *name, void *arg) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' unregistered\n", + EAL_LOG(DEBUG, "Mem event callback '%s:%p' unregistered", name, arg); unlock: @@ -242,7 +242,7 @@ eal_memalloc_mem_event_notify(enum rte_mem_event event, const void *start, rte_rwlock_read_lock(&mem_event_rwlock); TAILQ_FOREACH(entry, &mem_event_callback_list, next) { - RTE_LOG(DEBUG, EAL, "Calling mem event callback '%s:%p'\n", + EAL_LOG(DEBUG, "Calling mem event callback '%s:%p'", entry->name, entry->arg); entry->clb(event, start, len, entry->arg); } @@ -293,7 +293,7 @@ eal_memalloc_mem_alloc_validator_register(const char *name, ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered\n", + EAL_LOG(DEBUG, "Mem alloc validator '%s' on socket %i with limit %zu registered", name, socket_id, limit); 
unlock: @@ -332,7 +332,7 @@ eal_memalloc_mem_alloc_validator_unregister(const char *name, int socket_id) ret = 0; - RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered\n", + EAL_LOG(DEBUG, "Mem alloc validator '%s' on socket %i unregistered", name, socket_id); unlock: @@ -351,7 +351,7 @@ eal_memalloc_mem_alloc_validate(int socket_id, size_t new_len) TAILQ_FOREACH(entry, &mem_alloc_validator_list, next) { if (entry->socket_id != socket_id || entry->limit > new_len) continue; - RTE_LOG(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i\n", + EAL_LOG(DEBUG, "Calling mem alloc validator '%s' on socket %i", entry->name, entry->socket_id); if (entry->clb(socket_id, entry->limit, new_len) < 0) ret = -1; diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c index d9433db623..60ddc30580 100644 --- a/lib/eal/common/eal_common_memory.c +++ b/lib/eal/common/eal_common_memory.c @@ -57,7 +57,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, if (system_page_sz == 0) system_page_sz = rte_mem_page_size(); - RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size); + EAL_LOG(DEBUG, "Ask a virtual area of 0x%zx bytes", *size); addr_is_hint = (flags & EAL_VIRTUAL_AREA_ADDR_IS_HINT) > 0; allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0; @@ -94,7 +94,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size, do { map_sz = no_align ? *size : *size + page_sz; if (map_sz > SIZE_MAX) { - RTE_LOG(ERR, EAL, "Map size too big\n"); + EAL_LOG(ERR, "Map size too big"); rte_errno = E2BIG; return NULL; } @@ -125,16 +125,16 @@ eal_get_virtual_area(void *requested_addr, size_t *size, RTE_PTR_ALIGN(mapped_addr, page_sz); if (*size == 0) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area of any size: %s\n", + EAL_LOG(ERR, "Cannot get a virtual area of any size: %s", rte_strerror(rte_errno)); return NULL; } else if (mapped_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n", + EAL_LOG(ERR, "Cannot get a virtual area: %s", rte_strerror(rte_errno)); return NULL; } else if (requested_addr != NULL && !addr_is_hint && aligned_addr != requested_addr) { - RTE_LOG(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)\n", + EAL_LOG(ERR, "Cannot get a virtual area at requested address: %p (got %p)", requested_addr, aligned_addr); eal_mem_free(mapped_addr, map_sz); rte_errno = EADDRNOTAVAIL; @@ -146,19 +146,19 @@ eal_get_virtual_area(void *requested_addr, size_t *size, * a base virtual address. */ if (internal_conf->base_virtaddr != 0) { - RTE_LOG(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + EAL_LOG(WARNING, "WARNING! Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory into secondary processes\n"); + EAL_LOG(WARNING, " This may cause issues with mapping memory into secondary processes"); } else { - RTE_LOG(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n", + EAL_LOG(DEBUG, "WARNING! 
Base virtual address hint (%p != %p) not respected!", requested_addr, aligned_addr); - RTE_LOG(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes\n"); + EAL_LOG(DEBUG, " This may cause issues with mapping memory into secondary processes"); } } else if (next_baseaddr != NULL) { next_baseaddr = RTE_PTR_ADD(aligned_addr, *size); } - RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n", + EAL_LOG(DEBUG, "Virtual area found at %p (size = 0x%zx)", aligned_addr, *size); if (unmap) { @@ -202,7 +202,7 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, { if (rte_fbarray_init(&msl->memseg_arr, name, n_segs, sizeof(struct rte_memseg))) { - RTE_LOG(ERR, EAL, "Cannot allocate memseg list: %s\n", + EAL_LOG(ERR, "Cannot allocate memseg list: %s", rte_strerror(rte_errno)); return -1; } @@ -212,8 +212,8 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name, msl->base_va = NULL; msl->heap = heap; - RTE_LOG(DEBUG, EAL, - "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB\n", + EAL_LOG(DEBUG, + "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB", socket_id, page_sz >> 10); return 0; @@ -251,8 +251,8 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) * including common code, so don't duplicate the message. */ if (rte_errno == EADDRNOTAVAIL) - RTE_LOG(ERR, EAL, "Cannot reserve %llu bytes at [%p] - " - "please use '--" OPT_BASE_VIRTADDR "' option\n", + EAL_LOG(ERR, "Cannot reserve %llu bytes at [%p] - " + "please use '--" OPT_BASE_VIRTADDR "' option", (unsigned long long)mem_sz, msl->base_va); #endif return -1; @@ -260,7 +260,7 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags) msl->base_va = addr; msl->len = mem_sz; - RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n", + EAL_LOG(DEBUG, "VA reserved for memseg list at %p, size %zx", addr, mem_sz); return 0; @@ -472,7 +472,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + EAL_LOG(DEBUG, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -487,7 +487,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n"); + EAL_LOG(DEBUG, "Registering mem event callbacks not supported"); rte_errno = ENOTSUP; return -1; } @@ -503,7 +503,7 @@ rte_mem_alloc_validator_register(const char *name, /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + EAL_LOG(DEBUG, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -519,7 +519,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id) /* FreeBSD boots with legacy mem enabled by default */ if (internal_conf->legacy_mem) { - RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n"); + EAL_LOG(DEBUG, "Registering mem alloc validators not supported"); rte_errno = ENOTSUP; return -1; } @@ -545,10 +545,10 @@ check_iova(const struct rte_memseg_list *msl __rte_unused, if (!(iova & *mask)) return 0; - RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n", + EAL_LOG(DEBUG, "memseg iova 
%"PRIx64", len %zx, out of range", ms->iova, ms->len); - RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask); + EAL_LOG(DEBUG, "\tusing dma mask %"PRIx64, *mask); return 1; } @@ -565,7 +565,7 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe) /* Sanity check. We only check width can be managed with 64 bits * variables. Indeed any higher value is likely wrong. */ if (maskbits > MAX_DMA_MASK_BITS) { - RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n", + EAL_LOG(ERR, "wrong dma mask size %u (Max: %u)", maskbits, MAX_DMA_MASK_BITS); return -1; } @@ -925,7 +925,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[], /* get next available socket ID */ socket_id = mcfg->next_socket_id; if (socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + EAL_LOG(ERR, "Cannot assign new socket ID's"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -1030,7 +1030,7 @@ rte_eal_memory_detach(void) /* detach internal memory subsystem data first */ if (eal_memalloc_cleanup()) - RTE_LOG(ERR, EAL, "Could not release memory subsystem data\n"); + EAL_LOG(ERR, "Could not release memory subsystem data"); for (i = 0; i < RTE_DIM(mcfg->memsegs); i++) { struct rte_memseg_list *msl = &mcfg->memsegs[i]; @@ -1047,7 +1047,7 @@ rte_eal_memory_detach(void) */ if (!msl->external) if (rte_mem_unmap(msl->base_va, msl->len) != 0) - RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n", + EAL_LOG(ERR, "Could not unmap memory: %s", rte_strerror(rte_errno)); /* @@ -1056,7 +1056,7 @@ rte_eal_memory_detach(void) * have no way of knowing if they still do. */ if (rte_fbarray_detach(&msl->memseg_arr)) - RTE_LOG(ERR, EAL, "Could not detach fbarray: %s\n", + EAL_LOG(ERR, "Could not detach fbarray: %s", rte_strerror(rte_errno)); } rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock); @@ -1068,7 +1068,7 @@ rte_eal_memory_detach(void) */ if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) { if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0) - RTE_LOG(ERR, EAL, "Could not unmap shared memory config: %s\n", + EAL_LOG(ERR, "Could not unmap shared memory config: %s", rte_strerror(rte_errno)); } rte_eal_get_configuration()->mem_config = NULL; @@ -1084,7 +1084,7 @@ rte_eal_memory_init(void) eal_get_internal_configuration(); int retval; - RTE_LOG(DEBUG, EAL, "Setting up physically contiguous memory...\n"); + EAL_LOG(DEBUG, "Setting up physically contiguous memory..."); if (rte_eal_memseg_init() < 0) goto fail; @@ -1213,7 +1213,7 @@ handle_eal_memzone_info_request(const char *cmd __rte_unused, /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + EAL_LOG(DEBUG, "Skipping bad memzone"); return -1; } page_sz = (size_t)mz->hugepage_sz; @@ -1404,7 +1404,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + EAL_LOG(DEBUG, "Error fetching requested memseg."); return -1; } @@ -1477,7 +1477,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + EAL_LOG(DEBUG, "Error fetching requested memseg."); return -1; } @@ -1555,7 +1555,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused, ms = rte_fbarray_get(&msl->memseg_arr, ms_idx); if (ms == NULL) { 
rte_mcfg_mem_read_unlock(); - RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n"); + EAL_LOG(DEBUG, "Error fetching requested memseg."); return -1; } diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c index 1f3e701499..32e6b78f87 100644 --- a/lib/eal/common/eal_common_memzone.c +++ b/lib/eal/common/eal_common_memzone.c @@ -31,13 +31,13 @@ rte_memzone_max_set(size_t max) struct rte_mem_config *mcfg; if (eal_get_internal_configuration()->init_complete > 0) { - RTE_LOG(ERR, EAL, "Max memzone cannot be set after EAL init\n"); + EAL_LOG(ERR, "Max memzone cannot be set after EAL init"); return -1; } mcfg = rte_eal_get_configuration()->mem_config; if (mcfg == NULL) { - RTE_LOG(ERR, EAL, "Failed to set max memzone count\n"); + EAL_LOG(ERR, "Failed to set max memzone count"); return -1; } @@ -116,16 +116,16 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* no more room in config */ if (arr->count >= arr->len) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "%s(): Number of requested memzone segments exceeds maximum " - "%u\n", __func__, arr->len); + "%u", __func__, arr->len); rte_errno = ENOSPC; return NULL; } if (strlen(name) > sizeof(mz->name) - 1) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n", + EAL_LOG(DEBUG, "%s(): memzone <%s>: name too long", __func__, name); rte_errno = ENAMETOOLONG; return NULL; @@ -133,7 +133,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* zone already exist */ if ((memzone_lookup_thread_unsafe(name)) != NULL) { - RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n", + EAL_LOG(DEBUG, "%s(): memzone <%s> already exists", __func__, name); rte_errno = EEXIST; return NULL; @@ -141,7 +141,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, /* if alignment is not a power of two */ if (align && !rte_is_power_of_2(align)) { - RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__, + EAL_LOG(ERR, "%s(): Invalid alignment: %u", __func__, align); rte_errno = EINVAL; return NULL; @@ -218,7 +218,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len, } if (mz == NULL) { - RTE_LOG(ERR, EAL, "%s(): Cannot find free memzone\n", __func__); + EAL_LOG(ERR, "%s(): Cannot find free memzone", __func__); malloc_heap_free(elem); rte_errno = ENOSPC; return NULL; @@ -323,7 +323,7 @@ rte_memzone_free(const struct rte_memzone *mz) if (found_mz == NULL) { ret = -EINVAL; } else if (found_mz->addr == NULL) { - RTE_LOG(ERR, EAL, "Memzone is not allocated\n"); + EAL_LOG(ERR, "Memzone is not allocated"); ret = -EINVAL; } else { addr = found_mz->addr; @@ -385,7 +385,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg) /* go through each page occupied by this memzone */ msl = rte_mem_virt2memseg_list(mz->addr); if (!msl) { - RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n"); + EAL_LOG(DEBUG, "Skipping bad memzone"); return; } page_sz = (size_t)mz->hugepage_sz; @@ -434,11 +434,11 @@ rte_eal_memzone_init(void) if (rte_eal_process_type() == RTE_PROC_PRIMARY && rte_fbarray_init(&mcfg->memzones, "memzone", rte_memzone_max_get(), sizeof(struct rte_memzone))) { - RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n"); + EAL_LOG(ERR, "Cannot allocate memzone list"); ret = -1; } else if (rte_eal_process_type() == RTE_PROC_SECONDARY && rte_fbarray_attach(&mcfg->memzones)) { - RTE_LOG(ERR, EAL, "Cannot attach to memzone list\n"); + EAL_LOG(ERR, "Cannot attach to memzone list"); ret = -1; } diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c 
index e9ba01fb89..d9748076b4 100644 --- a/lib/eal/common/eal_common_options.c +++ b/lib/eal/common/eal_common_options.c @@ -255,14 +255,14 @@ eal_option_device_add(enum rte_devtype type, const char *optarg) optlen = strlen(optarg) + 1; devopt = calloc(1, sizeof(*devopt) + optlen); if (devopt == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate device option\n"); + EAL_LOG(ERR, "Unable to allocate device option"); return -ENOMEM; } devopt->type = type; ret = strlcpy(devopt->arg, optarg, optlen); if (ret < 0) { - RTE_LOG(ERR, EAL, "Unable to copy device option\n"); + EAL_LOG(ERR, "Unable to copy device option"); free(devopt); return -EINVAL; } @@ -281,7 +281,7 @@ eal_option_device_parse(void) if (ret == 0) { ret = rte_devargs_add(devopt->type, devopt->arg); if (ret) - RTE_LOG(ERR, EAL, "Unable to parse device '%s'\n", + EAL_LOG(ERR, "Unable to parse device '%s'", devopt->arg); } TAILQ_REMOVE(&devopt_list, devopt, next); @@ -360,7 +360,7 @@ eal_plugin_add(const char *path) solib = malloc(sizeof(*solib)); if (solib == NULL) { - RTE_LOG(ERR, EAL, "malloc(solib) failed\n"); + EAL_LOG(ERR, "malloc(solib) failed"); return -1; } memset(solib, 0, sizeof(*solib)); @@ -390,7 +390,7 @@ eal_plugindir_init(const char *path) d = opendir(path); if (d == NULL) { - RTE_LOG(ERR, EAL, "failed to open directory %s: %s\n", + EAL_LOG(ERR, "failed to open directory %s: %s", path, strerror(errno)); return -1; } @@ -442,13 +442,13 @@ verify_perms(const char *dirpath) /* call stat to check for permissions and ensure not world writable */ if (stat(dirpath, &st) != 0) { - RTE_LOG(ERR, EAL, "Error with stat on %s, %s\n", + EAL_LOG(ERR, "Error with stat on %s, %s", dirpath, strerror(errno)); return -1; } if (st.st_mode & S_IWOTH) { - RTE_LOG(ERR, EAL, - "Error, directory path %s is world-writable and insecure\n", + EAL_LOG(ERR, + "Error, directory path %s is world-writable and insecure", dirpath); return -1; } @@ -466,16 +466,16 @@ eal_dlopen(const char *pathname) /* not a full or relative path, try a load from system dirs */ retval = dlopen(pathname, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + EAL_LOG(ERR, "%s", dlerror()); return retval; } if (realp == NULL) { - RTE_LOG(ERR, EAL, "Error with realpath for %s, %s\n", + EAL_LOG(ERR, "Error with realpath for %s, %s", pathname, strerror(errno)); goto out; } if (strnlen(realp, PATH_MAX) == PATH_MAX) { - RTE_LOG(ERR, EAL, "Error, driver path greater than PATH_MAX\n"); + EAL_LOG(ERR, "Error, driver path greater than PATH_MAX"); goto out; } @@ -485,7 +485,7 @@ eal_dlopen(const char *pathname) retval = dlopen(realp, RTLD_NOW); if (retval == NULL) - RTE_LOG(ERR, EAL, "%s\n", dlerror()); + EAL_LOG(ERR, "%s", dlerror()); out: free(realp); return retval; @@ -500,7 +500,7 @@ is_shared_build(void) len = strlcpy(soname, EAL_SO"."ABI_VERSION, sizeof(soname)); if (len > sizeof(soname)) { - RTE_LOG(ERR, EAL, "Shared lib name too long in shared build check\n"); + EAL_LOG(ERR, "Shared lib name too long in shared build check"); len = sizeof(soname) - 1; } @@ -508,10 +508,10 @@ is_shared_build(void) void *handle; /* check if we have this .so loaded, if so - shared build */ - RTE_LOG(DEBUG, EAL, "Checking presence of .so '%s'\n", soname); + EAL_LOG(DEBUG, "Checking presence of .so '%s'", soname); handle = dlopen(soname, RTLD_LAZY | RTLD_NOLOAD); if (handle != NULL) { - RTE_LOG(INFO, EAL, "Detected shared linkage of DPDK\n"); + EAL_LOG(INFO, "Detected shared linkage of DPDK"); dlclose(handle); return 1; } @@ -524,7 +524,7 @@ is_shared_build(void) } } - RTE_LOG(INFO, 
EAL, "Detected static linkage of DPDK\n"); + EAL_LOG(INFO, "Detected static linkage of DPDK"); return 0; } @@ -549,13 +549,13 @@ eal_plugins_init(void) if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) { if (eal_plugindir_init(solib->name) == -1) { - RTE_LOG(ERR, EAL, - "Cannot init plugin directory %s\n", + EAL_LOG(ERR, + "Cannot init plugin directory %s", solib->name); return -1; } } else { - RTE_LOG(DEBUG, EAL, "open shared lib %s\n", + EAL_LOG(DEBUG, "open shared lib %s", solib->name); solib->lib_handle = eal_dlopen(solib->name); if (solib->lib_handle == NULL) @@ -626,15 +626,15 @@ eal_parse_service_coremask(const char *coremask) uint32_t lcore = idx; if (main_lcore_parsed && cfg->main_lcore == lcore) { - RTE_LOG(ERR, EAL, - "lcore %u is main lcore, cannot use as service core\n", + EAL_LOG(ERR, + "lcore %u is main lcore, cannot use as service core", idx); return -1; } if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, - "lcore %u unavailable\n", idx); + EAL_LOG(ERR, + "lcore %u unavailable", idx); return -1; } @@ -658,9 +658,9 @@ eal_parse_service_coremask(const char *coremask) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + EAL_LOG(WARNING, "Not all service cores are in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } cfg->service_lcore_count = count; @@ -689,7 +689,7 @@ update_lcore_config(int *cores) for (i = 0; i < RTE_MAX_LCORE; i++) { if (cores[i] != -1) { if (eal_cpu_detected(i) == 0) { - RTE_LOG(ERR, EAL, "lcore %u unavailable\n", i); + EAL_LOG(ERR, "lcore %u unavailable", i); ret = -1; continue; } @@ -717,7 +717,7 @@ check_core_list(int *lcores, unsigned int count) if (lcores[i] < RTE_MAX_LCORE) continue; - RTE_LOG(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)\n", + EAL_LOG(ERR, "lcore %d >= RTE_MAX_LCORE (%d)", lcores[i], RTE_MAX_LCORE); overflow = true; } @@ -737,9 +737,9 @@ check_core_list(int *lcores, unsigned int count) } if (len > 0) lcorestr[len - 1] = 0; - RTE_LOG(ERR, EAL, "To use high physical core ids, " + EAL_LOG(ERR, "To use high physical core ids, " "please use --lcores to map them to lcore ids below RTE_MAX_LCORE, " - "e.g. --lcores %s\n", lcorestr); + "e.g. --lcores %s", lcorestr); return -1; } @@ -769,7 +769,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) while ((i > 0) && isblank(coremask[i - 1])) i--; if (i == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + EAL_LOG(ERR, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -778,7 +778,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) c = coremask[i]; if (isxdigit(c) == 0) { /* invalid characters */ - RTE_LOG(ERR, EAL, "invalid characters in coremask: [%s]\n", + EAL_LOG(ERR, "invalid characters in coremask: [%s]", coremask_orig); return -1; } @@ -787,7 +787,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) { if ((1 << j) & val) { if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + EAL_LOG(ERR, "Too many lcores provided. 
Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -796,7 +796,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores) } } if (count == 0) { - RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n", + EAL_LOG(ERR, "No lcores in coremask: [%s]", coremask_orig); return -1; } @@ -864,8 +864,8 @@ eal_parse_service_corelist(const char *corelist) uint32_t lcore = idx; if (cfg->main_lcore == lcore && main_lcore_parsed) { - RTE_LOG(ERR, EAL, - "Error: lcore %u is main lcore, cannot use as service core\n", + EAL_LOG(ERR, + "Error: lcore %u is main lcore, cannot use as service core", idx); return -1; } @@ -887,9 +887,9 @@ eal_parse_service_corelist(const char *corelist) return -1; if (core_parsed && taken_lcore_count != count) { - RTE_LOG(WARNING, EAL, + EAL_LOG(WARNING, "Not all service cores were in the coremask. " - "Please ensure -c or -l includes service cores\n"); + "Please ensure -c or -l includes service cores"); } return 0; @@ -943,7 +943,7 @@ eal_parse_corelist(const char *corelist, int *cores) if (dup) continue; if (count >= RTE_MAX_LCORE) { - RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n", + EAL_LOG(ERR, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)", RTE_MAX_LCORE); return -1; } @@ -991,8 +991,8 @@ eal_parse_main_lcore(const char *arg) /* ensure main core is not used as service core */ if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) { - RTE_LOG(ERR, EAL, - "Error: Main lcore is used as a service core\n"); + EAL_LOG(ERR, + "Error: Main lcore is used as a service core"); return -1; } @@ -1132,8 +1132,8 @@ check_cpuset(rte_cpuset_t *set) continue; if (eal_cpu_detected(idx) == 0) { - RTE_LOG(ERR, EAL, "core %u " - "unavailable\n", idx); + EAL_LOG(ERR, "core %u " + "unavailable", idx); return -1; } } @@ -1612,8 +1612,8 @@ eal_parse_huge_unlink(const char *arg, struct hugepage_file_discipline *out) return 0; } if (strcmp(arg, HUGE_UNLINK_NEVER) == 0) { - RTE_LOG(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"=" - HUGE_UNLINK_NEVER" may create data leaks.\n"); + EAL_LOG(WARNING, "Using --"OPT_HUGE_UNLINK"=" + HUGE_UNLINK_NEVER" may create data leaks."); out->unlink_existing = false; return 0; } @@ -1648,24 +1648,24 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S\n"); + EAL_LOG(WARNING, + "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S"); if (rte_eal_parse_coremask(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid coremask syntax\n"); + EAL_LOG(ERR, "invalid coremask syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid coremask, please check specified cores are part of %s\n", + EAL_LOG(ERR, + "invalid coremask, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -c is ignored, because (%s) is set!\n", + EAL_LOG(ERR, "Option -c is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MAP) ? "--lcore" : "-c"); @@ -1680,25 +1680,25 @@ eal_parse_common_option(int opt, const char *optarg, int lcore_indexes[RTE_MAX_LCORE]; if (eal_service_cores_parsed()) - RTE_LOG(WARNING, EAL, - "Service cores parsed before dataplane cores. 
Please ensure -l is before -s or -S\n"); + EAL_LOG(WARNING, + "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S"); if (eal_parse_corelist(optarg, lcore_indexes) < 0) { - RTE_LOG(ERR, EAL, "invalid core list syntax\n"); + EAL_LOG(ERR, "invalid core list syntax"); return -1; } if (update_lcore_config(lcore_indexes) < 0) { char *available = available_cores(); - RTE_LOG(ERR, EAL, - "invalid core list, please check specified cores are part of %s\n", + EAL_LOG(ERR, + "invalid core list, please check specified cores are part of %s", available); free(available); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option -l is ignored, because (%s) is set!\n", + EAL_LOG(ERR, "Option -l is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_MSK) ? "-c" : (core_parsed == LCORE_OPT_MAP) ? "--lcore" : "-l"); @@ -1711,14 +1711,14 @@ eal_parse_common_option(int opt, const char *optarg, /* service coremask */ case 's': if (eal_parse_service_coremask(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service coremask\n"); + EAL_LOG(ERR, "invalid service coremask"); return -1; } break; /* service corelist */ case 'S': if (eal_parse_service_corelist(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid service core list\n"); + EAL_LOG(ERR, "invalid service core list"); return -1; } break; @@ -1733,7 +1733,7 @@ eal_parse_common_option(int opt, const char *optarg, case 'n': conf->force_nchannel = atoi(optarg); if (conf->force_nchannel == 0) { - RTE_LOG(ERR, EAL, "invalid channel number\n"); + EAL_LOG(ERR, "invalid channel number"); return -1; } break; @@ -1742,7 +1742,7 @@ eal_parse_common_option(int opt, const char *optarg, conf->force_nrank = atoi(optarg); if (conf->force_nrank == 0 || conf->force_nrank > 16) { - RTE_LOG(ERR, EAL, "invalid rank number\n"); + EAL_LOG(ERR, "invalid rank number"); return -1; } break; @@ -1756,13 +1756,13 @@ eal_parse_common_option(int opt, const char *optarg, * write message at highest log level so it can always * be seen * even if info or warning messages are disabled */ - RTE_LOG(CRIT, EAL, "RTE Version: '%s'\n", rte_version()); + EAL_LOG(CRIT, "RTE Version: '%s'", rte_version()); break; /* long options */ case OPT_HUGE_UNLINK_NUM: if (eal_parse_huge_unlink(optarg, &conf->hugepage_file) < 0) { - RTE_LOG(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option\n"); + EAL_LOG(ERR, "invalid --"OPT_HUGE_UNLINK" option"); return -1; } break; @@ -1802,8 +1802,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_MAIN_LCORE_NUM: if (eal_parse_main_lcore(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_MAIN_LCORE "\n"); + EAL_LOG(ERR, "invalid parameter for --" + OPT_MAIN_LCORE); return -1; } break; @@ -1818,8 +1818,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_SYSLOG_NUM: if (eal_parse_syslog(optarg, conf) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SYSLOG "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_SYSLOG); return -1; } break; @@ -1827,9 +1827,9 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LOG_LEVEL_NUM: { if (eal_parse_log_level(optarg) < 0) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "invalid parameters for --" - OPT_LOG_LEVEL "\n"); + OPT_LOG_LEVEL); return -1; } break; @@ -1838,8 +1838,8 @@ eal_parse_common_option(int opt, const char *optarg, #ifndef RTE_EXEC_ENV_WINDOWS case OPT_TRACE_NUM: { if (eal_trace_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE "\n"); + EAL_LOG(ERR, "invalid parameters for --" + 
OPT_TRACE); return -1; } break; @@ -1847,8 +1847,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_DIR_NUM: { if (eal_trace_dir_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_DIR "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_TRACE_DIR); return -1; } break; @@ -1856,8 +1856,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_BUF_SIZE_NUM: { if (eal_trace_bufsz_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_BUF_SIZE "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_TRACE_BUF_SIZE); return -1; } break; @@ -1865,8 +1865,8 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_TRACE_MODE_NUM: { if (eal_trace_mode_args_save(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_TRACE_MODE "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_TRACE_MODE); return -1; } break; @@ -1875,13 +1875,13 @@ eal_parse_common_option(int opt, const char *optarg, case OPT_LCORES_NUM: if (eal_parse_lcores(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_LCORES "\n"); + EAL_LOG(ERR, "invalid parameter for --" + OPT_LCORES); return -1; } if (core_parsed) { - RTE_LOG(ERR, EAL, "Option --lcore is ignored, because (%s) is set!\n", + EAL_LOG(ERR, "Option --lcore is ignored, because (%s) is set!", (core_parsed == LCORE_OPT_LST) ? "-l" : (core_parsed == LCORE_OPT_MSK) ? "-c" : "--lcore"); @@ -1898,15 +1898,15 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_IOVA_MODE_NUM: if (eal_parse_iova_mode(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_IOVA_MODE "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_IOVA_MODE); return -1; } break; case OPT_BASE_VIRTADDR_NUM: if (eal_parse_base_virtaddr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_BASE_VIRTADDR "\n"); + EAL_LOG(ERR, "invalid parameter for --" + OPT_BASE_VIRTADDR); return -1; } break; @@ -1917,8 +1917,8 @@ eal_parse_common_option(int opt, const char *optarg, break; case OPT_FORCE_MAX_SIMD_BITWIDTH_NUM: if (eal_parse_simd_bitwidth(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_FORCE_MAX_SIMD_BITWIDTH "\n"); + EAL_LOG(ERR, "invalid parameter for --" + OPT_FORCE_MAX_SIMD_BITWIDTH); return -1; } break; @@ -1932,8 +1932,8 @@ eal_parse_common_option(int opt, const char *optarg, return 0; ba_conflict: - RTE_LOG(ERR, EAL, - "Options allow (-a) and block (-b) can't be used at the same time\n"); + EAL_LOG(ERR, + "Options allow (-a) and block (-b) can't be used at the same time"); return -1; } @@ -2034,94 +2034,94 @@ eal_check_common_options(struct internal_config *internal_cfg) eal_get_internal_configuration(); if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) { - RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n"); + EAL_LOG(ERR, "Main lcore is not enabled for DPDK"); return -1; } if (internal_cfg->process_type == RTE_PROC_INVALID) { - RTE_LOG(ERR, EAL, "Invalid process type specified\n"); + EAL_LOG(ERR, "Invalid process type specified"); return -1; } if (internal_cfg->hugefile_prefix != NULL && strlen(internal_cfg->hugefile_prefix) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option\n"); + EAL_LOG(ERR, "Invalid length of --" OPT_FILE_PREFIX " option"); return -1; } if (internal_cfg->hugepage_dir != NULL && strlen(internal_cfg->hugepage_dir) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option\n"); + EAL_LOG(ERR, "Invalid length of --" OPT_HUGE_DIR" option"); 
return -1; } if (internal_cfg->user_mbuf_pool_ops_name != NULL && strlen(internal_cfg->user_mbuf_pool_ops_name) < 1) { - RTE_LOG(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option\n"); + EAL_LOG(ERR, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option"); return -1; } if (strchr(eal_get_hugefile_prefix(), '%') != NULL) { - RTE_LOG(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " - "option\n"); + EAL_LOG(ERR, "Invalid char, '%%', in --"OPT_FILE_PREFIX" " + "option"); return -1; } if (mem_parsed && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot " - "be specified at the same time\n"); + EAL_LOG(ERR, "Options -m and --"OPT_SOCKET_MEM" cannot " + "be specified at the same time"); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->force_sockets == 1) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + EAL_LOG(ERR, "Option --"OPT_SOCKET_MEM" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + EAL_LOG(ERR, "Option --"OPT_HUGE_UNLINK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->huge_worker_stack_size != 0) { - RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot " - "be specified together with --"OPT_NO_HUGE"\n"); + EAL_LOG(ERR, "Option --"OPT_HUGE_WORKER_STACK" cannot " + "be specified together with --"OPT_NO_HUGE); return -1; } if (internal_conf->force_socket_limits && internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_LIMIT - " is only supported in non-legacy memory mode\n"); + EAL_LOG(ERR, "Option --"OPT_SOCKET_LIMIT + " is only supported in non-legacy memory mode"); } if (internal_cfg->single_file_segments && internal_cfg->hugepage_file.unlink_before_mapping && !internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " - "not compatible with --"OPT_HUGE_UNLINK"\n"); + EAL_LOG(ERR, "Option --"OPT_SINGLE_FILE_SEGMENTS" is " + "not compatible with --"OPT_HUGE_UNLINK); return -1; } if (!internal_cfg->hugepage_file.unlink_existing && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible " - "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER"\n"); + EAL_LOG(ERR, "Option --"OPT_IN_MEMORY" is not compatible " + "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER); return -1; } if (internal_cfg->legacy_mem && internal_cfg->in_memory) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_IN_MEMORY"\n"); + EAL_LOG(ERR, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_IN_MEMORY); return -1; } if (internal_cfg->legacy_mem && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + EAL_LOG(ERR, "Option --"OPT_LEGACY_MEM" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->no_hugetlbfs && internal_cfg->match_allocations) { - RTE_LOG(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible " - "with --"OPT_MATCH_ALLOCATIONS"\n"); + EAL_LOG(ERR, "Option --"OPT_NO_HUGE" is not compatible " + "with --"OPT_MATCH_ALLOCATIONS); return -1; } if (internal_cfg->legacy_mem && internal_cfg->memory == 0) { - RTE_LOG(NOTICE, EAL, "Static 
memory layout is selected, " + EAL_LOG(NOTICE, "Static memory layout is selected, " "amount of reserved memory can be adjusted with " - "-m or --"OPT_SOCKET_MEM"\n"); + "-m or --"OPT_SOCKET_MEM); } return 0; @@ -2141,12 +2141,12 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth) struct internal_config *internal_conf = eal_get_internal_configuration(); if (internal_conf->max_simd_bitwidth.forced) { - RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n"); + EAL_LOG(NOTICE, "Cannot set max SIMD bitwidth - user runtime override enabled"); return -EPERM; } if (bitwidth < RTE_VECT_SIMD_DISABLED || !rte_is_power_of_2(bitwidth)) { - RTE_LOG(ERR, EAL, "Invalid bitwidth value!\n"); + EAL_LOG(ERR, "Invalid bitwidth value!"); return -EINVAL; } internal_conf->max_simd_bitwidth.bitwidth = bitwidth; diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c index 728815c4a9..d24093937c 100644 --- a/lib/eal/common/eal_common_proc.c +++ b/lib/eal/common/eal_common_proc.c @@ -181,12 +181,12 @@ static int validate_action_name(const char *name) { if (name == NULL) { - RTE_LOG(ERR, EAL, "Action name cannot be NULL\n"); + EAL_LOG(ERR, "Action name cannot be NULL"); rte_errno = EINVAL; return -1; } if (strnlen(name, RTE_MP_MAX_NAME_LEN) == 0) { - RTE_LOG(ERR, EAL, "Length of action name is zero\n"); + EAL_LOG(ERR, "Length of action name is zero"); rte_errno = EINVAL; return -1; } @@ -208,7 +208,7 @@ rte_mp_action_register(const char *name, rte_mp_t action) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } @@ -244,7 +244,7 @@ rte_mp_action_unregister(const char *name) return; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); return; } @@ -291,12 +291,12 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) if (errno == EINTR) goto retry; - RTE_LOG(ERR, EAL, "recvmsg failed, %s\n", strerror(errno)); + EAL_LOG(ERR, "recvmsg failed, %s", strerror(errno)); return -1; } if (msglen != buflen || (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) { - RTE_LOG(ERR, EAL, "truncated msg\n"); + EAL_LOG(ERR, "truncated msg"); return -1; } @@ -311,11 +311,11 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s) } /* sanity-check the response */ if (m->msg.num_fds < 0 || m->msg.num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "invalid number of fd's received\n"); + EAL_LOG(ERR, "invalid number of fd's received"); return -1; } if (m->msg.len_param < 0 || m->msg.len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "invalid received data length\n"); + EAL_LOG(ERR, "invalid received data length"); return -1; } return msglen; @@ -340,7 +340,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "msg: %s\n", msg->name); + EAL_LOG(DEBUG, "msg: %s", msg->name); if (m->type == MP_REP || m->type == MP_IGN) { struct pending_request *req = NULL; @@ -359,7 +359,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) req = async_reply_handle_thread_unsafe( pending_req); } else { - RTE_LOG(ERR, EAL, "Drop mp reply: %s\n", msg->name); + EAL_LOG(ERR, "Drop mp reply: %s", msg->name); cleanup_msg_fds(msg); } pthread_mutex_unlock(&pending_requests.lock); 
@@ -388,12 +388,12 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s) strlcpy(dummy.name, msg->name, sizeof(dummy.name)); mp_send(&dummy, s->sun_path, MP_IGN); } else { - RTE_LOG(ERR, EAL, "Cannot find action: %s\n", + EAL_LOG(ERR, "Cannot find action: %s", msg->name); } cleanup_msg_fds(msg); } else if (action(msg, s->sun_path) < 0) { - RTE_LOG(ERR, EAL, "Fail to handle message: %s\n", msg->name); + EAL_LOG(ERR, "Fail to handle message: %s", msg->name); } } @@ -459,7 +459,7 @@ process_async_request(struct pending_request *sr, const struct timespec *now) tmp = realloc(user_msgs, sizeof(*msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + EAL_LOG(ERR, "Fail to alloc reply for request %s:%s", sr->dst, sr->request->name); /* this entry is going to be removed and its message * dropped, but we don't want to leak memory, so @@ -518,7 +518,7 @@ async_reply_handle_thread_unsafe(void *arg) struct timespec ts_now; if (clock_gettime(CLOCK_MONOTONIC, &ts_now) < 0) { - RTE_LOG(ERR, EAL, "Cannot get current time\n"); + EAL_LOG(ERR, "Cannot get current time"); goto no_trigger; } @@ -532,10 +532,10 @@ async_reply_handle_thread_unsafe(void *arg) * handling the same message twice. */ if (rte_errno == EINPROGRESS) { - RTE_LOG(DEBUG, EAL, "Request handling is already in progress\n"); + EAL_LOG(DEBUG, "Request handling is already in progress"); goto no_trigger; } - RTE_LOG(ERR, EAL, "Failed to cancel alarm\n"); + EAL_LOG(ERR, "Failed to cancel alarm"); } if (action == ACTION_TRIGGER) @@ -570,7 +570,7 @@ open_socket_fd(void) mp_fd = socket(AF_UNIX, SOCK_DGRAM, 0); if (mp_fd < 0) { - RTE_LOG(ERR, EAL, "failed to create unix socket\n"); + EAL_LOG(ERR, "failed to create unix socket"); return -1; } @@ -582,13 +582,13 @@ open_socket_fd(void) unlink(un.sun_path); /* May still exist since last run */ if (bind(mp_fd, (struct sockaddr *)&un, sizeof(un)) < 0) { - RTE_LOG(ERR, EAL, "failed to bind %s: %s\n", + EAL_LOG(ERR, "failed to bind %s: %s", un.sun_path, strerror(errno)); close(mp_fd); return -1; } - RTE_LOG(INFO, EAL, "Multi-process socket %s\n", un.sun_path); + EAL_LOG(INFO, "Multi-process socket %s", un.sun_path); return mp_fd; } @@ -614,7 +614,7 @@ rte_mp_channel_init(void) * so no need to initialize IPC. 
*/ if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC will be disabled"); rte_errno = ENOTSUP; return -1; } @@ -630,13 +630,13 @@ rte_mp_channel_init(void) /* lock the directory */ dir_fd = open(mp_dir_path, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "failed to open %s: %s\n", + EAL_LOG(ERR, "failed to open %s: %s", mp_dir_path, strerror(errno)); return -1; } if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "failed to lock %s: %s\n", + EAL_LOG(ERR, "failed to lock %s: %s", mp_dir_path, strerror(errno)); close(dir_fd); return -1; @@ -649,7 +649,7 @@ rte_mp_channel_init(void) if (rte_thread_create_internal_control(&mp_handle_tid, "mp-msg", mp_handle, NULL) < 0) { - RTE_LOG(ERR, EAL, "failed to create mp thread: %s\n", + EAL_LOG(ERR, "failed to create mp thread: %s", strerror(errno)); close(dir_fd); close(rte_atomic_exchange_explicit(&mp_fd, -1, rte_memory_order_relaxed)); @@ -732,7 +732,7 @@ send_msg(const char *dst_path, struct rte_mp_msg *msg, int type) unlink(dst_path); return 0; } - RTE_LOG(ERR, EAL, "failed to send to (%s) due to %s\n", + EAL_LOG(ERR, "failed to send to (%s) due to %s", dst_path, strerror(errno)); return -1; } @@ -760,7 +760,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) /* broadcast to all secondary processes */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", + EAL_LOG(ERR, "Unable to open directory %s", mp_dir_path); rte_errno = errno; return -1; @@ -769,7 +769,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type) dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + EAL_LOG(ERR, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; closedir(mp_dir); @@ -799,7 +799,7 @@ static int check_input(const struct rte_mp_msg *msg) { if (msg == NULL) { - RTE_LOG(ERR, EAL, "Msg cannot be NULL\n"); + EAL_LOG(ERR, "Msg cannot be NULL"); rte_errno = EINVAL; return -1; } @@ -808,25 +808,25 @@ check_input(const struct rte_mp_msg *msg) return -1; if (msg->len_param < 0) { - RTE_LOG(ERR, EAL, "Message data length is negative\n"); + EAL_LOG(ERR, "Message data length is negative"); rte_errno = EINVAL; return -1; } if (msg->num_fds < 0) { - RTE_LOG(ERR, EAL, "Number of fd's is negative\n"); + EAL_LOG(ERR, "Number of fd's is negative"); rte_errno = EINVAL; return -1; } if (msg->len_param > RTE_MP_MAX_PARAM_LEN) { - RTE_LOG(ERR, EAL, "Message data is too long\n"); + EAL_LOG(ERR, "Message data is too long"); rte_errno = E2BIG; return -1; } if (msg->num_fds > RTE_MP_MAX_FD_NUM) { - RTE_LOG(ERR, EAL, "Cannot send more than %d FDs\n", + EAL_LOG(ERR, "Cannot send more than %d FDs", RTE_MP_MAX_FD_NUM); rte_errno = E2BIG; return -1; @@ -845,12 +845,12 @@ rte_mp_sendmsg(struct rte_mp_msg *msg) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } - RTE_LOG(DEBUG, EAL, "sendmsg: %s\n", msg->name); + EAL_LOG(DEBUG, "sendmsg: %s", msg->name); return mp_send(msg, NULL, MP_MSG); } @@ -865,7 +865,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, pending_req = calloc(1, sizeof(*pending_req)); reply_msg = calloc(1, sizeof(*reply_msg)); if (pending_req == NULL || reply_msg == NULL) { - RTE_LOG(ERR, EAL, 
"Could not allocate space for sync request\n"); + EAL_LOG(ERR, "Could not allocate space for sync request"); rte_errno = ENOMEM; ret = -1; goto fail; @@ -881,7 +881,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + EAL_LOG(ERR, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; ret = -1; goto fail; @@ -889,7 +889,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + EAL_LOG(ERR, "Fail to send request %s:%s", dst, req->name); ret = -1; goto fail; @@ -902,7 +902,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req, /* if alarm set fails, we simply ignore the reply */ if (rte_eal_alarm_set(ts->tv_sec * 1000000 + ts->tv_nsec / 1000, async_reply_handle, pending_req) < 0) { - RTE_LOG(ERR, EAL, "Fail to set alarm for request %s:%s\n", + EAL_LOG(ERR, "Fail to set alarm for request %s:%s", dst, req->name); ret = -1; goto fail; @@ -936,14 +936,14 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, exist = find_pending_request(dst, req->name); if (exist) { - RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name); + EAL_LOG(ERR, "A pending request %s:%s", dst, req->name); rte_errno = EEXIST; return -1; } ret = send_msg(dst, req, MP_REQ); if (ret < 0) { - RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n", + EAL_LOG(ERR, "Fail to send request %s:%s", dst, req->name); return -1; } else if (ret == 0) @@ -961,13 +961,13 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, TAILQ_REMOVE(&pending_requests.requests, &pending_req, next); if (pending_req.reply_received == 0) { - RTE_LOG(ERR, EAL, "Fail to recv reply for request %s:%s\n", + EAL_LOG(ERR, "Fail to recv reply for request %s:%s", dst, req->name); rte_errno = ETIMEDOUT; return -1; } if (pending_req.reply_received == -1) { - RTE_LOG(DEBUG, EAL, "Asked to ignore response\n"); + EAL_LOG(DEBUG, "Asked to ignore response"); /* not receiving this message is not an error, so decrement * number of sent messages */ @@ -977,7 +977,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req, tmp = realloc(reply->msgs, sizeof(msg) * (reply->nb_received + 1)); if (!tmp) { - RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n", + EAL_LOG(ERR, "Fail to alloc reply for request %s:%s", dst, req->name); rte_errno = ENOMEM; return -1; @@ -999,7 +999,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + EAL_LOG(DEBUG, "request: %s", req->name); reply->nb_sent = 0; reply->nb_received = 0; @@ -1009,13 +1009,13 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, goto end; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + EAL_LOG(ERR, "Failed to get current time"); rte_errno = errno; goto end; } @@ -1035,7 +1035,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, /* for primary process, broadcast request, and collect reply 1 by 1 */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open 
directory %s\n", mp_dir_path); + EAL_LOG(ERR, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto end; } @@ -1043,7 +1043,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply, dir_fd = dirfd(mp_dir); /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + EAL_LOG(ERR, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto close_end; @@ -1102,19 +1102,19 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, const struct internal_config *internal_conf = eal_get_internal_configuration(); - RTE_LOG(DEBUG, EAL, "request: %s\n", req->name); + EAL_LOG(DEBUG, "request: %s", req->name); if (check_input(req) != 0) return -1; if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); rte_errno = ENOTSUP; return -1; } if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) { - RTE_LOG(ERR, EAL, "Failed to get current time\n"); + EAL_LOG(ERR, "Failed to get current time"); rte_errno = errno; return -1; } @@ -1122,7 +1122,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, dummy = calloc(1, sizeof(*dummy)); param = calloc(1, sizeof(*param)); if (copy == NULL || dummy == NULL || param == NULL) { - RTE_LOG(ERR, EAL, "Failed to allocate memory for async reply\n"); + EAL_LOG(ERR, "Failed to allocate memory for async reply"); rte_errno = ENOMEM; goto fail; } @@ -1180,7 +1180,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* for primary process, broadcast request */ mp_dir = opendir(mp_dir_path); if (!mp_dir) { - RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path); + EAL_LOG(ERR, "Unable to open directory %s", mp_dir_path); rte_errno = errno; goto unlock_fail; } @@ -1188,7 +1188,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, /* lock the directory to prevent processes spinning up while we send */ if (flock(dir_fd, LOCK_SH)) { - RTE_LOG(ERR, EAL, "Unable to lock directory %s\n", + EAL_LOG(ERR, "Unable to lock directory %s", mp_dir_path); rte_errno = errno; goto closedir_fail; @@ -1240,7 +1240,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts, int rte_mp_reply(struct rte_mp_msg *msg, const char *peer) { - RTE_LOG(DEBUG, EAL, "reply: %s\n", msg->name); + EAL_LOG(DEBUG, "reply: %s", msg->name); const struct internal_config *internal_conf = eal_get_internal_configuration(); @@ -1248,13 +1248,13 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer) return -1; if (peer == NULL) { - RTE_LOG(ERR, EAL, "peer is not specified\n"); + EAL_LOG(ERR, "peer is not specified"); rte_errno = EINVAL; return -1; } if (internal_conf->no_shconf) { - RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n"); + EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled"); return 0; } diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c index 580fbf24bc..56bb2bd91c 100644 --- a/lib/eal/common/eal_common_tailqs.c +++ b/lib/eal/common/eal_common_tailqs.c @@ -109,8 +109,8 @@ int rte_eal_tailq_register(struct rte_tailq_elem *t) { if (rte_eal_tailq_local_register(t) < 0) { - RTE_LOG(ERR, EAL, - "%s tailq is already registered\n", t->name); + EAL_LOG(ERR, + "%s tailq is already registered", t->name); goto error; } @@ -119,8 +119,8 @@ rte_eal_tailq_register(struct rte_tailq_elem *t) if 
(rte_tailqs_count >= 0) { rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + EAL_LOG(ERR, + "Cannot initialize tailq: %s", t->name); TAILQ_REMOVE(&rte_tailq_elem_head, t, next); goto error; } @@ -145,8 +145,8 @@ rte_eal_tailqs_init(void) * rte_eal_tailq_register and EAL_REGISTER_TAILQ */ rte_eal_tailq_update(t); if (t->head == NULL) { - RTE_LOG(ERR, EAL, - "Cannot initialize tailq: %s\n", t->name); + EAL_LOG(ERR, + "Cannot initialize tailq: %s", t->name); /* TAILQ_REMOVE not needed, error is already fatal */ goto fail; } diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c index c422ea8b53..a53bc639ae 100644 --- a/lib/eal/common/eal_common_thread.c +++ b/lib/eal/common/eal_common_thread.c @@ -86,7 +86,7 @@ int rte_thread_set_affinity(rte_cpuset_t *cpusetp) { if (rte_thread_set_affinity_by_id(rte_thread_self(), cpusetp) != 0) { - RTE_LOG(ERR, EAL, "rte_thread_set_affinity_by_id failed\n"); + EAL_LOG(ERR, "rte_thread_set_affinity_by_id failed"); return -1; } @@ -175,7 +175,7 @@ eal_thread_loop(void *arg) __rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + EAL_LOG(DEBUG, "lcore %u is ready (tid=%zx;cpuset=[%s%s])", lcore_id, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -368,12 +368,12 @@ rte_thread_register(void) /* EAL init flushes all lcores, we can't register before. */ if (eal_get_internal_configuration()->init_complete != 1) { - RTE_LOG(DEBUG, EAL, "Called %s before EAL init.\n", __func__); + EAL_LOG(DEBUG, "Called %s before EAL init.", __func__); rte_errno = EINVAL; return -1; } if (!rte_mp_disable()) { - RTE_LOG(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.\n"); + EAL_LOG(ERR, "Multiprocess in use, registering non-EAL threads is not supported."); rte_errno = EINVAL; return -1; } @@ -387,7 +387,7 @@ rte_thread_register(void) rte_errno = ENOMEM; return -1; } - RTE_LOG(DEBUG, EAL, "Registered non-EAL thread as lcore %u.\n", + EAL_LOG(DEBUG, "Registered non-EAL thread as lcore %u.", lcore_id); return 0; } @@ -401,7 +401,7 @@ rte_thread_unregister(void) eal_lcore_non_eal_release(lcore_id); __rte_thread_uninit(); if (lcore_id != LCORE_ID_ANY) - RTE_LOG(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).\n", + EAL_LOG(DEBUG, "Unregistered non-EAL thread (was lcore %u).", lcore_id); } diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c index 5686a5102b..c5c4703f15 100644 --- a/lib/eal/common/eal_common_timer.c +++ b/lib/eal/common/eal_common_timer.c @@ -39,8 +39,8 @@ static uint64_t estimate_tsc_freq(void) { #define CYC_PER_10MHZ 1E7 - RTE_LOG(WARNING, EAL, "WARNING: TSC frequency estimated roughly" - " - clock timings may be less accurate.\n"); + EAL_LOG(WARNING, "WARNING: TSC frequency estimated roughly" + " - clock timings may be less accurate."); /* assume that the rte_delay_us_sleep() will sleep for 1 second */ uint64_t start = rte_rdtsc(); rte_delay_us_sleep(US_PER_S); @@ -71,7 +71,7 @@ set_tsc_freq(void) if (!freq) freq = estimate_tsc_freq(); - RTE_LOG(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz\n", freq / 1000); + EAL_LOG(DEBUG, "TSC frequency is ~%" PRIu64 " KHz", freq / 1000); eal_tsc_resolution_hz = freq; mcfg->tsc_hz = freq; } diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c index 8561a0e198..7282715b11 100644 
--- a/lib/eal/common/eal_common_trace_utils.c +++ b/lib/eal/common/eal_common_trace_utils.c @@ -12,6 +12,7 @@ #include <rte_string_fns.h> #include "eal_filesystem.h" +#include "eal_private.h" #include "eal_trace.h" const char * @@ -348,7 +349,7 @@ trace_mkdir(void) return -rte_errno; } - RTE_LOG(INFO, EAL, "Trace dir: %s\n", trace->dir); + EAL_LOG(INFO, "Trace dir: %s", trace->dir); already_done = true; return 0; } diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h index 4d2e806610..afd87000e7 100644 --- a/lib/eal/common/eal_private.h +++ b/lib/eal/common/eal_private.h @@ -12,6 +12,7 @@ #include <dev_driver.h> #include <rte_lcore.h> +#include <rte_log.h> #include <rte_memory.h> #include "eal_internal_cfg.h" @@ -747,4 +748,7 @@ int eal_asprintf(char **buffer, const char *format, ...); eal_asprintf(buffer, format, ##__VA_ARGS__) #endif +#define EAL_LOG(level, fmt, ...) \ + RTE_LOG(level, EAL, fmt "\n", ## __VA_ARGS__) + #endif /* _EAL_PRIVATE_H_ */ diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h index ace2ef3ee5..bd082a0337 100644 --- a/lib/eal/common/eal_trace.h +++ b/lib/eal/common/eal_trace.h @@ -17,10 +17,10 @@ #include "eal_thread.h" #define trace_err(fmt, args...) \ - RTE_LOG(ERR, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + EAL_LOG(ERR, "%s():%u " fmt, __func__, __LINE__, ## args) #define trace_crit(fmt, args...) \ - RTE_LOG(CRIT, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args) + EAL_LOG(CRIT, "%s():%u " fmt, __func__, __LINE__, ## args) #define TRACE_CTF_MAGIC 0xC1FC1FC1 #define TRACE_MAX_ARGS 32 diff --git a/lib/eal/common/hotplug_mp.c b/lib/eal/common/hotplug_mp.c index 602781966c..17089ca3db 100644 --- a/lib/eal/common/hotplug_mp.c +++ b/lib/eal/common/hotplug_mp.c @@ -77,7 +77,7 @@ send_response_to_secondary(const struct eal_dev_mp_req *req, ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + EAL_LOG(ERR, "failed to send response to secondary"); return ret; } @@ -101,18 +101,18 @@ __handle_secondary_request(void *param) if (req->t == EAL_DEV_REQ_TYPE_ATTACH) { ret = local_dev_probe(req->devargs, &dev); if (ret != 0 && ret != -EEXIST) { - RTE_LOG(ERR, EAL, "Failed to hotplug add device on primary\n"); + EAL_LOG(ERR, "Failed to hotplug add device on primary"); goto finish; } ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + EAL_LOG(ERR, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } if (tmp_req.result != 0) { ret = tmp_req.result; - RTE_LOG(ERR, EAL, "Failed to hotplug add device on secondary\n"); + EAL_LOG(ERR, "Failed to hotplug add device on secondary"); if (ret != -EEXIST) goto rollback; } @@ -123,27 +123,27 @@ __handle_secondary_request(void *param) ret = eal_dev_hotplug_request_to_secondary(&tmp_req); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n"); + EAL_LOG(ERR, "Failed to send hotplug request to secondary"); ret = -ENOMSG; goto rollback; } bus = rte_bus_find_by_name(da.bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da.bus->name); + EAL_LOG(ERR, "Cannot find bus (%s)", da.bus->name); ret = -ENOENT; goto finish; } dev = bus->find_device(NULL, cmp_dev_name, da.name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da.name); + EAL_LOG(ERR, "Cannot find plugged device (%s)", da.name); ret = -ENOENT; goto finish; } if (tmp_req.result != 0) 
{ - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on secondary\n"); + EAL_LOG(ERR, "Failed to hotplug remove device on secondary"); ret = tmp_req.result; if (ret != -ENOENT) goto rollback; @@ -151,12 +151,12 @@ __handle_secondary_request(void *param) ret = local_dev_remove(dev); if (ret != 0) { - RTE_LOG(ERR, EAL, "Failed to hotplug remove device on primary\n"); + EAL_LOG(ERR, "Failed to hotplug remove device on primary"); if (ret != -ENOENT) goto rollback; } } else { - RTE_LOG(ERR, EAL, "unsupported secondary to primary request\n"); + EAL_LOG(ERR, "unsupported secondary to primary request"); ret = -ENOTSUP; } goto finish; @@ -174,7 +174,7 @@ __handle_secondary_request(void *param) finish: ret = send_response_to_secondary(&tmp_req, ret, bundle->peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send response to secondary\n"); + EAL_LOG(ERR, "failed to send response to secondary"); rte_devargs_reset(&da); free(bundle->peer); @@ -191,7 +191,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle = malloc(sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + EAL_LOG(ERR, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -204,7 +204,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) bundle->peer = strdup(peer); if (bundle->peer == NULL) { free(bundle); - RTE_LOG(ERR, EAL, "not enough memory\n"); + EAL_LOG(ERR, "not enough memory"); return send_response_to_secondary(req, -ENOMEM, peer); } @@ -214,7 +214,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer) */ ret = rte_eal_alarm_set(1, __handle_secondary_request, bundle); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to add mp task\n"); + EAL_LOG(ERR, "failed to add mp task"); free(bundle->peer); free(bundle); return send_response_to_secondary(req, ret, peer); @@ -257,14 +257,14 @@ static void __handle_primary_request(void *param) bus = rte_bus_find_by_name(da->bus->name); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da->bus->name); + EAL_LOG(ERR, "Cannot find bus (%s)", da->bus->name); ret = -ENOENT; goto quit; } dev = bus->find_device(NULL, cmp_dev_name, da->name); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da->name); + EAL_LOG(ERR, "Cannot find plugged device (%s)", da->name); ret = -ENOENT; goto quit; } @@ -296,7 +296,7 @@ static void __handle_primary_request(void *param) memcpy(resp, req, sizeof(*resp)); resp->result = ret; if (rte_mp_reply(&mp_resp, bundle->peer) < 0) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + EAL_LOG(ERR, "failed to send reply to primary request"); free(bundle->peer); free(bundle); @@ -320,11 +320,11 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) bundle = calloc(1, sizeof(*bundle)); if (bundle == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + EAL_LOG(ERR, "not enough memory"); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + EAL_LOG(ERR, "failed to send reply to primary request"); return ret; } @@ -336,12 +336,12 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) */ bundle->peer = (void *)strdup(peer); if (bundle->peer == NULL) { - RTE_LOG(ERR, EAL, "not enough memory\n"); + EAL_LOG(ERR, "not enough memory"); free(bundle); resp->result = -ENOMEM; ret = rte_mp_reply(&mp_resp, peer); if (ret) - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + 
EAL_LOG(ERR, "failed to send reply to primary request"); return ret; } @@ -356,7 +356,7 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer) resp->result = ret; ret = rte_mp_reply(&mp_resp, peer); if (ret != 0) { - RTE_LOG(ERR, EAL, "failed to send reply to primary request\n"); + EAL_LOG(ERR, "failed to send reply to primary request"); return ret; } } @@ -378,7 +378,7 @@ int eal_dev_hotplug_request_to_primary(struct eal_dev_mp_req *req) ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts); if (ret || mp_reply.nb_received != 1) { - RTE_LOG(ERR, EAL, "Cannot send request to primary\n"); + EAL_LOG(ERR, "Cannot send request to primary"); if (!ret) return -1; return ret; @@ -408,14 +408,14 @@ int eal_dev_hotplug_request_to_secondary(struct eal_dev_mp_req *req) if (ret != 0) { /* if IPC is not supported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "rte_mp_request_sync failed\n"); + EAL_LOG(ERR, "rte_mp_request_sync failed"); else ret = 0; return ret; } if (mp_reply.nb_sent != mp_reply.nb_received) { - RTE_LOG(ERR, EAL, "not all secondary reply\n"); + EAL_LOG(ERR, "not all secondary reply"); free(mp_reply.msgs); return -1; } @@ -448,7 +448,7 @@ int eal_mp_dev_hotplug_init(void) handle_secondary_request); /* primary is allowed to not support IPC */ if (ret != 0 && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } @@ -456,7 +456,7 @@ int eal_mp_dev_hotplug_init(void) ret = rte_mp_action_register(EAL_DEV_MP_ACTION_REQUEST, handle_primary_request); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", EAL_DEV_MP_ACTION_REQUEST); return ret; } diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c index f5d1c8c2e2..452b119c20 100644 --- a/lib/eal/common/malloc_elem.c +++ b/lib/eal/common/malloc_elem.c @@ -148,7 +148,7 @@ malloc_elem_insert(struct malloc_elem *elem) /* first and last elements must be both NULL or both non-NULL */ if ((heap->first == NULL) != (heap->last == NULL)) { - RTE_LOG(ERR, EAL, "Heap is probably corrupt\n"); + EAL_LOG(ERR, "Heap is probably corrupt"); return; } @@ -628,7 +628,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(hide_end); } else if (len_after > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + EAL_LOG(ERR, "Unaligned element, heap is probably corrupt"); return; } } @@ -647,7 +647,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len) malloc_elem_free_list_insert(prev); } else if (len_before > 0) { - RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n"); + EAL_LOG(ERR, "Unaligned element, heap is probably corrupt"); return; } } diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c index 6b6cf9174c..5ff27548ff 100644 --- a/lib/eal/common/malloc_heap.c +++ b/lib/eal/common/malloc_heap.c @@ -117,7 +117,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap_idx = malloc_socket_to_heap_id(msl->socket_id); if (heap_idx < 0) { - RTE_LOG(ERR, EAL, "Memseg list has invalid socket id\n"); + EAL_LOG(ERR, "Memseg list has invalid socket id"); return -1; } heap = &mcfg->malloc_heaps[heap_idx]; @@ -135,7 +135,7 @@ malloc_add_seg(const struct rte_memseg_list *msl, heap->total_size += len; - RTE_LOG(DEBUG, EAL, "Added %zuM to heap on socket %i\n", len >> 20, + EAL_LOG(DEBUG, "Added 
%zuM to heap on socket %i", len >> 20, msl->socket_id); return 0; } @@ -308,7 +308,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* first, check if we're allowed to allocate this memory */ if (eal_memalloc_mem_alloc_validate(socket, heap->total_size + alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n"); + EAL_LOG(DEBUG, "User has disallowed allocation"); return NULL; } @@ -324,7 +324,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, /* check if we wanted contiguous memory but didn't get it */ if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) { - RTE_LOG(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space\n", + EAL_LOG(DEBUG, "%s(): couldn't allocate physically contiguous space", __func__); goto fail; } @@ -352,8 +352,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, * which could solve some situations when IOVA VA is not * really needed. */ - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n", + EAL_LOG(ERR, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask", __func__); /* @@ -363,8 +363,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size, */ if ((rte_eal_iova_mode() == RTE_IOVA_VA) && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter\n", + EAL_LOG(ERR, + "%s(): Please try initializing EAL with --iova-mode=pa parameter", __func__); goto fail; } @@ -440,7 +440,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz, } heap->total_size += alloc_sz; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB\n", + EAL_LOG(DEBUG, "Heap on socket %d was expanded by %zdMB", socket, alloc_sz >> 20ULL); free(ms); @@ -693,7 +693,7 @@ malloc_heap_alloc_on_heap_id(const char *type, size_t size, /* this should have succeeded */ if (ret == NULL) - RTE_LOG(ERR, EAL, "Error allocating from heap\n"); + EAL_LOG(ERR, "Error allocating from heap"); } alloc_unlock: rte_spinlock_unlock(&(heap->lock)); @@ -1040,7 +1040,7 @@ malloc_heap_free(struct malloc_elem *elem) /* we didn't exit early, meaning we have unmapped some pages */ unmapped = true; - RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n", + EAL_LOG(DEBUG, "Heap on socket %d was shrunk by %zdMB", msl->socket_id, aligned_len >> 20ULL); rte_mcfg_mem_write_unlock(); @@ -1199,7 +1199,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], } } if (msl == NULL) { - RTE_LOG(ERR, EAL, "Couldn't find empty memseg list\n"); + EAL_LOG(ERR, "Couldn't find empty memseg list"); rte_errno = ENOSPC; return NULL; } @@ -1210,7 +1210,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[], /* create the backing fbarray */ if (rte_fbarray_init(&msl->memseg_arr, fbarray_name, n_pages, sizeof(struct rte_memseg)) < 0) { - RTE_LOG(ERR, EAL, "Couldn't create fbarray backing the memseg list\n"); + EAL_LOG(ERR, "Couldn't create fbarray backing the memseg list"); return NULL; } arr = &msl->memseg_arr; @@ -1310,7 +1310,7 @@ malloc_heap_add_external_memory(struct malloc_heap *heap, heap->total_size += msl->len; /* all done! 
*/ - RTE_LOG(DEBUG, EAL, "Added segment for heap %s starting at %p\n", + EAL_LOG(DEBUG, "Added segment for heap %s starting at %p", heap->name, msl->base_va); /* notify all subscribers that a new memory area has been added */ @@ -1356,7 +1356,7 @@ malloc_heap_create(struct malloc_heap *heap, const char *heap_name) /* prevent overflow. did you really create 2 billion heaps??? */ if (next_socket_id > INT32_MAX) { - RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n"); + EAL_LOG(ERR, "Cannot assign new socket ID's"); rte_errno = ENOSPC; return -1; } @@ -1382,17 +1382,17 @@ int malloc_heap_destroy(struct malloc_heap *heap) { if (heap->alloc_count != 0) { - RTE_LOG(ERR, EAL, "Heap is still in use\n"); + EAL_LOG(ERR, "Heap is still in use"); rte_errno = EBUSY; return -1; } if (heap->first != NULL || heap->last != NULL) { - RTE_LOG(ERR, EAL, "Heap still contains memory segments\n"); + EAL_LOG(ERR, "Heap still contains memory segments"); rte_errno = EBUSY; return -1; } if (heap->total_size != 0) - RTE_LOG(ERR, EAL, "Total size not zero, heap is likely corrupt\n"); + EAL_LOG(ERR, "Total size not zero, heap is likely corrupt"); /* Reset all of the heap but the (hold) lock so caller can release it. */ RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0); @@ -1411,7 +1411,7 @@ rte_eal_malloc_heap_init(void) eal_get_internal_configuration(); if (internal_conf->match_allocations) - RTE_LOG(DEBUG, EAL, "Hugepages will be freed exactly as allocated.\n"); + EAL_LOG(DEBUG, "Hugepages will be freed exactly as allocated."); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* assign min socket ID to external heaps */ @@ -1431,7 +1431,7 @@ rte_eal_malloc_heap_init(void) } if (register_mp_requests()) { - RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n"); + EAL_LOG(ERR, "Couldn't register malloc multiprocess actions"); return -1; } diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c index 4d62397aba..2d39b0716f 100644 --- a/lib/eal/common/malloc_mp.c +++ b/lib/eal/common/malloc_mp.c @@ -156,7 +156,7 @@ handle_sync(const struct rte_mp_msg *msg, const void *peer) int ret; if (req->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected request from primary\n"); + EAL_LOG(ERR, "Unexpected request from primary"); return -1; } @@ -189,19 +189,19 @@ handle_free_request(const struct malloc_mp_req *m) /* check if the requested memory actually exists */ msl = rte_mem_virt2memseg_list(start); if (msl == NULL) { - RTE_LOG(ERR, EAL, "Requested to free unknown memory\n"); + EAL_LOG(ERR, "Requested to free unknown memory"); return -1; } /* check if end is within the same memory region */ if (rte_mem_virt2memseg_list(end) != msl) { - RTE_LOG(ERR, EAL, "Requested to free memory spanning multiple regions\n"); + EAL_LOG(ERR, "Requested to free memory spanning multiple regions"); return -1; } /* we're supposed to only free memory that's not external */ if (msl->external) { - RTE_LOG(ERR, EAL, "Requested to free external memory\n"); + EAL_LOG(ERR, "Requested to free external memory"); return -1; } @@ -228,13 +228,13 @@ handle_alloc_request(const struct malloc_mp_req *m, /* this is checked by the API, but we need to prevent divide by zero */ if (ar->page_sz == 0 || !rte_is_power_of_2(ar->page_sz)) { - RTE_LOG(ERR, EAL, "Attempting to allocate with invalid page size\n"); + EAL_LOG(ERR, "Attempting to allocate with invalid page size"); return -1; } /* heap idx is index into the heap array, not socket ID */ if (ar->malloc_heap_idx >= RTE_MAX_HEAPS) { - RTE_LOG(ERR, EAL, "Attempting to allocate 
from invalid heap\n"); + EAL_LOG(ERR, "Attempting to allocate from invalid heap"); return -1; } @@ -247,7 +247,7 @@ handle_alloc_request(const struct malloc_mp_req *m, * socket ID's are always lower than RTE_MAX_NUMA_NODES. */ if (heap->socket_id >= RTE_MAX_NUMA_NODES) { - RTE_LOG(ERR, EAL, "Attempting to allocate from external heap\n"); + EAL_LOG(ERR, "Attempting to allocate from external heap"); return -1; } @@ -258,7 +258,7 @@ handle_alloc_request(const struct malloc_mp_req *m, /* we can't know in advance how many pages we'll need, so we malloc */ ms = malloc(sizeof(*ms) * n_segs); if (ms == NULL) { - RTE_LOG(ERR, EAL, "Couldn't allocate memory for request state\n"); + EAL_LOG(ERR, "Couldn't allocate memory for request state"); return -1; } memset(ms, 0, sizeof(*ms) * n_segs); @@ -307,13 +307,13 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) /* make sure it's not a dupe */ entry = find_request_by_id(m->id); if (entry != NULL) { - RTE_LOG(ERR, EAL, "Duplicate request id\n"); + EAL_LOG(ERR, "Duplicate request id"); goto fail; } entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate memory for request\n"); + EAL_LOG(ERR, "Unable to allocate memory for request"); goto fail; } @@ -325,7 +325,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) } else if (m->t == REQ_TYPE_FREE) { ret = handle_free_request(m); } else { - RTE_LOG(ERR, EAL, "Unexpected request from secondary\n"); + EAL_LOG(ERR, "Unexpected request from secondary"); goto fail; } @@ -345,7 +345,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) resp->id = m->id; if (rte_mp_sendmsg(&resp_msg)) { - RTE_LOG(ERR, EAL, "Couldn't send response\n"); + EAL_LOG(ERR, "Couldn't send response"); goto fail; } /* we did not modify the request */ @@ -376,7 +376,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused) handle_sync_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Couldn't send sync request\n"); + EAL_LOG(ERR, "Couldn't send sync request"); if (m->t == REQ_TYPE_ALLOC) free(entry->alloc_state.ms); goto fail; @@ -414,7 +414,7 @@ handle_sync_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + EAL_LOG(ERR, "Wrong request ID"); goto fail; } @@ -428,12 +428,12 @@ handle_sync_response(const struct rte_mp_msg *request, (struct malloc_mp_req *)reply->msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response to sync request\n"); + EAL_LOG(ERR, "Unexpected response to sync request"); result = REQ_RESULT_FAIL; break; } if (resp->id != entry->user_req.id) { - RTE_LOG(ERR, EAL, "Response to wrong sync request\n"); + EAL_LOG(ERR, "Response to wrong sync request"); result = REQ_RESULT_FAIL; break; } @@ -458,7 +458,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + EAL_LOG(ERR, "Could not send message to secondary process"); TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry); @@ -482,7 +482,7 @@ handle_sync_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + EAL_LOG(ERR, "Could not send message to secondary process"); 
TAILQ_REMOVE(&mp_request_list.list, entry, next); free(entry->alloc_state.ms); @@ -524,7 +524,7 @@ handle_sync_response(const struct rte_mp_msg *request, handle_rollback_response); } while (ret != 0 && rte_errno == EEXIST); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not send rollback request to secondary process\n"); + EAL_LOG(ERR, "Could not send rollback request to secondary process"); /* we couldn't send rollback request, but that's OK - * secondary will time out, and memory has been removed @@ -536,7 +536,7 @@ handle_sync_response(const struct rte_mp_msg *request, goto fail; } } else { - RTE_LOG(ERR, EAL, " to sync request of unknown type\n"); + EAL_LOG(ERR, " to sync request of unknown type"); goto fail; } @@ -564,12 +564,12 @@ handle_rollback_response(const struct rte_mp_msg *request, entry = find_request_by_id(mpreq->id); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + EAL_LOG(ERR, "Wrong request ID"); goto fail; } if (entry->user_req.t != REQ_TYPE_ALLOC) { - RTE_LOG(ERR, EAL, "Unexpected active request\n"); + EAL_LOG(ERR, "Unexpected active request"); goto fail; } @@ -582,7 +582,7 @@ handle_rollback_response(const struct rte_mp_msg *request, strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name)); if (rte_mp_sendmsg(&msg)) - RTE_LOG(ERR, EAL, "Could not send message to secondary process\n"); + EAL_LOG(ERR, "Could not send message to secondary process"); /* clean up */ TAILQ_REMOVE(&mp_request_list.list, entry, next); @@ -657,14 +657,14 @@ request_sync(void) if (ret != 0) { /* if IPC is unsupported, behave as if the call succeeded */ if (rte_errno != ENOTSUP) - RTE_LOG(ERR, EAL, "Could not send sync request to secondary process\n"); + EAL_LOG(ERR, "Could not send sync request to secondary process"); else ret = 0; goto out; } if (reply.nb_received != reply.nb_sent) { - RTE_LOG(ERR, EAL, "Not all secondaries have responded\n"); + EAL_LOG(ERR, "Not all secondaries have responded"); goto out; } @@ -672,15 +672,15 @@ request_sync(void) struct malloc_mp_req *resp = (struct malloc_mp_req *)reply.msgs[i].param; if (resp->t != REQ_TYPE_SYNC) { - RTE_LOG(ERR, EAL, "Unexpected response from secondary\n"); + EAL_LOG(ERR, "Unexpected response from secondary"); goto out; } if (resp->id != req->id) { - RTE_LOG(ERR, EAL, "Wrong request ID\n"); + EAL_LOG(ERR, "Wrong request ID"); goto out; } if (resp->result != REQ_RESULT_SUCCESS) { - RTE_LOG(ERR, EAL, "Secondary process failed to synchronize\n"); + EAL_LOG(ERR, "Secondary process failed to synchronize"); goto out; } } @@ -711,14 +711,14 @@ request_to_primary(struct malloc_mp_req *user_req) entry = malloc(sizeof(*entry)); if (entry == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for request\n"); + EAL_LOG(ERR, "Cannot allocate memory for request"); goto fail; } memset(entry, 0, sizeof(*entry)); if (gettimeofday(&now, NULL) < 0) { - RTE_LOG(ERR, EAL, "Cannot get current time\n"); + EAL_LOG(ERR, "Cannot get current time"); goto fail; } @@ -740,7 +740,7 @@ request_to_primary(struct malloc_mp_req *user_req) memcpy(msg_req, user_req, sizeof(*msg_req)); if (rte_mp_sendmsg(&msg)) { - RTE_LOG(ERR, EAL, "Cannot send message to primary\n"); + EAL_LOG(ERR, "Cannot send message to primary"); goto fail; } @@ -759,7 +759,7 @@ request_to_primary(struct malloc_mp_req *user_req) } while (ret != 0 && ret != ETIMEDOUT); if (entry->state != REQ_STATE_COMPLETE) { - RTE_LOG(ERR, EAL, "Request timed out\n"); + EAL_LOG(ERR, "Request timed out"); ret = -1; } else { ret = 0; @@ -783,24 +783,24 @@ register_mp_requests(void) /* it's OK for primary to 
not support IPC */ if (rte_mp_action_register(MP_ACTION_REQUEST, handle_request) && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", MP_ACTION_REQUEST); return -1; } } else { if (rte_mp_action_register(MP_ACTION_SYNC, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_ROLLBACK, handle_sync)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", MP_ACTION_SYNC); return -1; } if (rte_mp_action_register(MP_ACTION_RESPONSE, handle_response)) { - RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n", + EAL_LOG(ERR, "Couldn't register '%s' action", MP_ACTION_RESPONSE); return -1; } diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c index e0494b2010..f6db97317c 100644 --- a/lib/eal/common/rte_keepalive.c +++ b/lib/eal/common/rte_keepalive.c @@ -11,6 +11,8 @@ #include <rte_keepalive.h> #include <rte_malloc.h> +#include "eal_private.h" + struct rte_keepalive { /** Core Liveness. */ struct { @@ -53,7 +55,7 @@ struct rte_keepalive { static void print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core) { - RTE_LOG(INFO, EAL, "%sLast seen %" PRId64 "ms ago.\n", + EAL_LOG(INFO, "%sLast seen %" PRId64 "ms ago.", msg, ((rte_rdtsc() - keepcfg->last_alive[idx_core])*1000) / rte_get_tsc_hz() diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c index 9db0c399ae..6d3c301a23 100644 --- a/lib/eal/common/rte_malloc.c +++ b/lib/eal/common/rte_malloc.c @@ -35,7 +35,7 @@ mem_free(void *addr, const bool trace_ena) if (addr == NULL) return; if (malloc_heap_free(malloc_elem_from_data(addr)) < 0) - RTE_LOG(ERR, EAL, "Error: Invalid memory\n"); + EAL_LOG(ERR, "Error: Invalid memory"); } void @@ -171,7 +171,7 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket) struct malloc_elem *elem = malloc_elem_from_data(ptr); if (elem == NULL) { - RTE_LOG(ERR, EAL, "Error: memory corruption detected\n"); + EAL_LOG(ERR, "Error: memory corruption detected"); return NULL; } @@ -598,7 +598,7 @@ rte_malloc_heap_create(const char *heap_name) /* existing heap */ if (strncmp(heap_name, tmp->name, RTE_HEAP_NAME_MAX_LEN) == 0) { - RTE_LOG(ERR, EAL, "Heap %s already exists\n", + EAL_LOG(ERR, "Heap %s already exists", heap_name); rte_errno = EEXIST; ret = -1; @@ -611,7 +611,7 @@ rte_malloc_heap_create(const char *heap_name) } } if (heap == NULL) { - RTE_LOG(ERR, EAL, "Cannot create new heap: no space\n"); + EAL_LOG(ERR, "Cannot create new heap: no space"); rte_errno = ENOSPC; ret = -1; goto unlock; @@ -643,7 +643,7 @@ rte_malloc_heap_destroy(const char *heap_name) /* start from non-socket heaps */ heap = find_named_heap(heap_name); if (heap == NULL) { - RTE_LOG(ERR, EAL, "Heap %s not found\n", heap_name); + EAL_LOG(ERR, "Heap %s not found", heap_name); rte_errno = ENOENT; ret = -1; goto unlock; diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c index e183d2e631..d959c91459 100644 --- a/lib/eal/common/rte_service.c +++ b/lib/eal/common/rte_service.c @@ -87,8 +87,8 @@ rte_service_init(void) RTE_BUILD_BUG_ON(RTE_SERVICE_NUM_MAX > 64); if (rte_service_library_initialized) { - RTE_LOG(NOTICE, EAL, - "service library init() called, init flag %d\n", + EAL_LOG(NOTICE, + "service library init() called, init flag %d", rte_service_library_initialized); return -EALREADY; } @@ -97,14 +97,14 @@ 
rte_service_init(void) sizeof(struct rte_service_spec_impl), RTE_CACHE_LINE_SIZE); if (!rte_services) { - RTE_LOG(ERR, EAL, "error allocating rte services array\n"); + EAL_LOG(ERR, "error allocating rte services array"); goto fail_mem; } lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE, sizeof(struct core_state), RTE_CACHE_LINE_SIZE); if (!lcore_states) { - RTE_LOG(ERR, EAL, "error allocating core states array\n"); + EAL_LOG(ERR, "error allocating core states array"); goto fail_mem; } diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c index 568e06e9ed..bab77118e9 100644 --- a/lib/eal/freebsd/eal.c +++ b/lib/eal/freebsd/eal.c @@ -117,7 +117,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -127,7 +127,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -136,8 +136,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + EAL_LOG(ERR, "Cannot create lock on '%s'. Is another primary " + "process running?", pathname); return -1; } @@ -145,7 +145,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + EAL_LOG(ERR, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -156,7 +156,7 @@ rte_eal_config_create(void) cfg_len_aligned, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0); if (mapped_mem_cfg_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + EAL_LOG(ERR, "Cannot remap memory for rte_config"); munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; @@ -190,7 +190,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -202,7 +202,7 @@ rte_eal_config_attach(void) if (rte_mem_cfg_addr == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -242,14 +242,14 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + EAL_LOG(ERR, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + EAL_LOG(ERR, "Cannot mmap memory for rte_config! 
error %i (%s)", errno, strerror(errno)); return -1; } @@ -280,7 +280,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + EAL_LOG(INFO, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY"); return ptype; @@ -307,20 +307,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + EAL_LOG(ERR, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + EAL_LOG(ERR, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + EAL_LOG(ERR, "Invalid process type %d", config->process_type); return -1; } @@ -454,7 +454,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + EAL_LOG(ERR, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -469,16 +469,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on FreeBSD\n", opt); + EAL_LOG(ERR, "Option %c is not supported " + "on FreeBSD", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on FreeBSD\n", + EAL_LOG(ERR, "Option %s is not supported " + "on FreeBSD", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on FreeBSD\n", opt); + EAL_LOG(ERR, "Option %d is not supported " + "on FreeBSD", opt); } eal_usage(prgname); ret = -1; @@ -489,11 +489,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + EAL_LOG(ERR, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + EAL_LOG(WARNING, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -545,7 +545,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + EAL_LOG(WARNING, "WARNING: Main core has no memory on local socket!"); } @@ -572,7 +572,7 @@ rte_eal_iopl_init(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + EAL_LOG(ERR, "%s", msg); } /* Launch threads, called at application init(). 
*/ @@ -629,7 +629,8 @@ rte_eal_init(int argc, char **argv) /* FreeBSD always uses legacy memory model */ internal_conf->legacy_mem = true; if (internal_conf->in_memory) { - RTE_LOG(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'\n", OPT_IN_MEMORY); + EAL_LOG(WARNING, "Warning: ignoring unsupported flag, '%s'", + OPT_IN_MEMORY); internal_conf->in_memory = false; } @@ -695,14 +696,14 @@ rte_eal_init(int argc, char **argv) has_phys_addr = internal_conf->no_hugetlbfs == 0; iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + EAL_LOG(DEBUG, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + EAL_LOG(DEBUG, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + EAL_LOG(DEBUG, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -725,7 +726,7 @@ rte_eal_init(int argc, char **argv) } rte_eal_get_configuration()->iova_mode = iova_mode; - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + EAL_LOG(INFO, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -751,11 +752,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + EAL_LOG(DEBUG, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + EAL_LOG(WARNING, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -818,7 +819,7 @@ rte_eal_init(int argc, char **argv) ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? 
"" : "..."); @@ -917,7 +918,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + EAL_LOG(WARNING, "Already called cleanup"); rte_errno = EALREADY; return -1; } diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c index e5b0909a45..94cae5f4b6 100644 --- a/lib/eal/freebsd/eal_alarm.c +++ b/lib/eal/freebsd/eal_alarm.c @@ -59,7 +59,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + EAL_LOG(ERR, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c index c3dfe9108f..d8659cc7fc 100644 --- a/lib/eal/freebsd/eal_dev.c +++ b/lib/eal/freebsd/eal_dev.c @@ -5,30 +5,32 @@ #include <rte_log.h> #include <rte_dev.h> +#include "eal_private.h" + int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + EAL_LOG(ERR, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + EAL_LOG(ERR, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + EAL_LOG(ERR, "Device event is not supported for FreeBSD"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n"); + EAL_LOG(ERR, "Device event is not supported for FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c index e58e618469..b6772e0701 100644 --- a/lib/eal/freebsd/eal_hugepage_info.c +++ b/lib/eal/freebsd/eal_hugepage_info.c @@ -72,7 +72,7 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers\n"); + EAL_LOG(ERR, "could not read sysctl hw.contigmem.num_buffers"); return -1; } @@ -81,28 +81,28 @@ eal_hugepage_info_init(void) &sysctl_size, NULL, 0); if (error != 0) { - RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size\n"); + EAL_LOG(ERR, "could not read sysctl hw.contigmem.buffer_size"); return -1; } fd = open(CONTIGMEM_DEV, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "could not open "CONTIGMEM_DEV"\n"); + EAL_LOG(ERR, "could not open "CONTIGMEM_DEV); return -1; } if (flock(fd, LOCK_EX | LOCK_NB) < 0) { - RTE_LOG(ERR, EAL, "could not lock memory. Is another DPDK process running?\n"); + EAL_LOG(ERR, "could not lock memory. 
Is another DPDK process running?"); return -1; } if (buffer_size >= 1<<30) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB\n", + EAL_LOG(INFO, "Contigmem driver has %d buffers, each of size %dGB", num_buffers, (int)(buffer_size>>30)); else if (buffer_size >= 1<<20) - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB\n", + EAL_LOG(INFO, "Contigmem driver has %d buffers, each of size %dMB", num_buffers, (int)(buffer_size>>20)); else - RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB\n", + EAL_LOG(INFO, "Contigmem driver has %d buffers, each of size %dKB", num_buffers, (int)(buffer_size>>10)); strlcpy(hpi->hugedir, CONTIGMEM_DEV, sizeof(hpi->hugedir)); @@ -117,7 +117,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL ) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + EAL_LOG(ERR, "Failed to create shared memory!"); return -1; } @@ -132,7 +132,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + EAL_LOG(ERR, "Failed to unmap shared memory!"); return -1; } @@ -154,14 +154,14 @@ eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + EAL_LOG(ERR, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + EAL_LOG(ERR, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c index 2b31dfb099..23747babc2 100644 --- a/lib/eal/freebsd/eal_interrupts.c +++ b/lib/eal/freebsd/eal_interrupts.c @@ -90,12 +90,12 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, - "Registering with invalid input parameter\n"); + EAL_LOG(ERR, + "Registering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active: %d\n", kq); + EAL_LOG(ERR, "Kqueue is not active: %d", kq); return -ENODEV; } @@ -120,7 +120,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + EAL_LOG(ERR, "Can not allocate memory"); ret = -ENOMEM; goto fail; } @@ -132,13 +132,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + EAL_LOG(ERR, "Can not allocate memory"); ret = -ENOMEM; goto fail; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + EAL_LOG(ERR, "Can not create intr instance"); ret = -ENOMEM; free(src); src = NULL; @@ -167,7 +167,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, ke.flags = EV_ADD; /* mark for addition to the queue */ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert interrupt handle to 
kevent\n"); + EAL_LOG(ERR, "Cannot convert interrupt handle to kevent"); ret = -ENODEV; goto fail; } @@ -181,10 +181,10 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, * user. so, don't output it unless debug log level set. */ if (errno == ENODEV) - RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n", + EAL_LOG(DEBUG, "Interrupt handle %d not supported", rte_intr_fd_get(src->intr_handle)); else - RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n", + EAL_LOG(ERR, "Error adding fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); ret = -errno; @@ -222,13 +222,13 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + EAL_LOG(ERR, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + EAL_LOG(ERR, "Kqueue is not active"); return -ENODEV; } @@ -277,12 +277,12 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, - "Unregistering with invalid input parameter\n"); + EAL_LOG(ERR, + "Unregistering with invalid input parameter"); return -EINVAL; } if (kq < 0) { - RTE_LOG(ERR, EAL, "Kqueue is not active\n"); + EAL_LOG(ERR, "Kqueue is not active"); return -ENODEV; } @@ -312,7 +312,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, ke.flags = EV_DELETE; /* mark for deletion from the queue */ if (intr_source_to_kevent(intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + EAL_LOG(ERR, "Cannot convert to kevent"); ret = -ENODEV; goto out; } @@ -321,7 +321,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, * remove intr file descriptor from wait list. 
*/ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + EAL_LOG(ERR, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected condition @@ -396,7 +396,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + EAL_LOG(ERR, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -437,7 +437,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + EAL_LOG(ERR, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -513,13 +513,13 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + EAL_LOG(ERR, "Error reading from file " + "descriptor %d: %s", event_fd, strerror(errno)); } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", event_fd); + EAL_LOG(ERR, "Read nothing from file " + "descriptor %d", event_fd); else call = true; } @@ -556,7 +556,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) ke.flags = EV_DELETE; if (intr_source_to_kevent(src->intr_handle, &ke) < 0) { - RTE_LOG(ERR, EAL, "Cannot convert to kevent\n"); + EAL_LOG(ERR, "Cannot convert to kevent"); rte_spinlock_unlock(&intr_lock); return; } @@ -565,7 +565,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds) * remove intr file descriptor from wait list. */ if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) { - RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n", + EAL_LOG(ERR, "Error removing fd %d kevent, %s", rte_intr_fd_get(src->intr_handle), strerror(errno)); /* removing non-existent even is an expected @@ -606,8 +606,8 @@ eal_intr_thread_main(void *arg __rte_unused) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "kevent returns with fail\n"); + EAL_LOG(ERR, + "kevent returns with fail"); break; } /* kevent timeout, will never happen here */ @@ -632,7 +632,7 @@ rte_eal_intr_init(void) kq = kqueue(); if (kq < 0) { - RTE_LOG(ERR, EAL, "Cannot create kqueue instance\n"); + EAL_LOG(ERR, "Cannot create kqueue instance"); return -1; } @@ -641,8 +641,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for interrupt handling\n"); + EAL_LOG(ERR, + "Failed to create thread for interrupt handling"); } return ret; diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c index d9ef4bc9c5..1d3d1b67b9 100644 --- a/lib/eal/freebsd/eal_lcore.c +++ b/lib/eal/freebsd/eal_lcore.c @@ -30,7 +30,7 @@ eal_get_ncpus(void) if (ncpu < 0) { sysctl(mib, 2, &ncpu, &len, NULL, 0); - RTE_LOG(INFO, EAL, "Sysctl reports %d cpus\n", ncpu); + EAL_LOG(INFO, "Sysctl reports %d cpus", ncpu); } return ncpu; } diff --git a/lib/eal/freebsd/eal_memalloc.c b/lib/eal/freebsd/eal_memalloc.c index 00ab02cb63..49faf0c6aa 100644 --- a/lib/eal/freebsd/eal_memalloc.c +++ b/lib/eal/freebsd/eal_memalloc.c @@ -9,27 +9,28 @@ #include <rte_memory.h> #include "eal_memalloc.h" +#include "eal_private.h" int eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms __rte_unused, int __rte_unused n_segs, size_t __rte_unused page_sz, int __rte_unused socket, bool __rte_unused exact) { - 
RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + EAL_LOG(ERR, "Memory hotplug not supported on FreeBSD"); return -1; } struct rte_memseg * eal_memalloc_alloc_seg(size_t __rte_unused page_sz, int __rte_unused socket) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + EAL_LOG(ERR, "Memory hotplug not supported on FreeBSD"); return NULL; } int eal_memalloc_free_seg(struct rte_memseg *ms __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + EAL_LOG(ERR, "Memory hotplug not supported on FreeBSD"); return -1; } @@ -37,14 +38,14 @@ int eal_memalloc_free_seg_bulk(struct rte_memseg **ms __rte_unused, int n_segs __rte_unused) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + EAL_LOG(ERR, "Memory hotplug not supported on FreeBSD"); return -1; } int eal_memalloc_sync_with_primary(void) { - RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n"); + EAL_LOG(ERR, "Memory hotplug not supported on FreeBSD"); return -1; } diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c index 5c6165c580..a6f3ba226e 100644 --- a/lib/eal/freebsd/eal_memory.c +++ b/lib/eal/freebsd/eal_memory.c @@ -84,7 +84,7 @@ rte_eal_hugepage_init(void) addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + EAL_LOG(ERR, "%s: mmap() failed: %s", __func__, strerror(errno)); return -1; } @@ -132,8 +132,8 @@ rte_eal_hugepage_init(void) error = sysctlbyname(physaddr_str, &physaddr, &sysctl_size, NULL, 0); if (error < 0) { - RTE_LOG(ERR, EAL, "Failed to get physical addr for buffer %u " - "from %s\n", j, hpi->hugedir); + EAL_LOG(ERR, "Failed to get physical addr for buffer %u " + "from %s", j, hpi->hugedir); return -1; } @@ -172,8 +172,8 @@ rte_eal_hugepage_init(void) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + EAL_LOG(ERR, "Could not find space for memseg. 
Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } arr = &msl->memseg_arr; @@ -190,7 +190,7 @@ rte_eal_hugepage_init(void) hpi->lock_descriptor, j * EAL_PAGE_SIZE); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + EAL_LOG(ERR, "Failed to mmap buffer %u from %s", j, hpi->hugedir); return -1; } @@ -205,8 +205,8 @@ rte_eal_hugepage_init(void) rte_fbarray_set_used(arr, ms_idx); - RTE_LOG(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%" - PRIx64", len %zu\n", + EAL_LOG(INFO, "Mapped memory segment %u @ %p: physaddr:0x%" + PRIx64", len %zu", seg_idx++, addr, physaddr, page_sz); total_mem += seg->len; @@ -215,9 +215,9 @@ rte_eal_hugepage_init(void) break; } if (total_mem < internal_conf->memory) { - RTE_LOG(ERR, EAL, "Couldn't reserve requested memory, " + EAL_LOG(ERR, "Couldn't reserve requested memory, " "requested: %" PRIu64 "M " - "available: %" PRIu64 "M\n", + "available: %" PRIu64 "M", internal_conf->memory >> 20, total_mem >> 20); return -1; } @@ -268,7 +268,7 @@ rte_eal_hugepage_attach(void) /* Obtain a file descriptor for contiguous memory */ fd_hugepage = open(cur_hpi->hugedir, O_RDWR); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + EAL_LOG(ERR, "Could not open %s", cur_hpi->hugedir); goto error; } @@ -277,7 +277,7 @@ rte_eal_hugepage_attach(void) /* Map the contiguous memory into each memory segment */ if (rte_memseg_walk(attach_segment, &wa) < 0) { - RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n", + EAL_LOG(ERR, "Failed to mmap buffer %u from %s", wa.seg_idx, cur_hpi->hugedir); goto error; } @@ -402,8 +402,8 @@ memseg_primary_init(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + EAL_LOG(ERR, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -424,7 +424,7 @@ memseg_primary_init(void) type_msl_idx++; if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n"); + EAL_LOG(ERR, "Cannot allocate VA space for memseg list"); return -1; } } @@ -449,13 +449,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + EAL_LOG(ERR, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (memseg_list_alloc(msl)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + EAL_LOG(ERR, "Cannot preallocate VA space for hugepage memory"); return -1; } } diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c index 6f97a3c2c1..77b7621908 100644 --- a/lib/eal/freebsd/eal_thread.c +++ b/lib/eal/freebsd/eal_thread.c @@ -38,7 +38,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + EAL_LOG(DEBUG, "Truncated thread name"); pthread_set_name_np((pthread_t)thread_id.opaque_id, truncated); } diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c index beff755a47..3dd70e24ba 100644 --- a/lib/eal/freebsd/eal_timer.c +++ b/lib/eal/freebsd/eal_timer.c @@ -36,20 +36,20 @@ get_tsc_freq(void) tmp = 0; if (sysctlbyname("kern.timecounter.smp_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", 
strerror(errno)); + EAL_LOG(WARNING, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not safe to use in SMP mode\n"); + EAL_LOG(WARNING, "TSC is not safe to use in SMP mode"); tmp = 0; if (sysctlbyname("kern.timecounter.invariant_tsc", &tmp, &sz, NULL, 0)) - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + EAL_LOG(WARNING, "%s", strerror(errno)); else if (tmp != 1) - RTE_LOG(WARNING, EAL, "TSC is not invariant\n"); + EAL_LOG(WARNING, "TSC is not invariant"); sz = sizeof(tsc_hz); if (sysctlbyname("machdep.tsc_freq", &tsc_hz, &sz, NULL, 0)) { - RTE_LOG(WARNING, EAL, "%s\n", strerror(errno)); + EAL_LOG(WARNING, "%s", strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c index 57da058cec..fd422f1f62 100644 --- a/lib/eal/linux/eal.c +++ b/lib/eal/linux/eal.c @@ -94,7 +94,7 @@ eal_clean_runtime_dir(void) /* open directory */ dir = opendir(runtime_dir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open runtime directory %s\n", + EAL_LOG(ERR, "Unable to open runtime directory %s", runtime_dir); goto error; } @@ -102,14 +102,14 @@ eal_clean_runtime_dir(void) /* lock the directory before doing anything, to avoid races */ if (flock(dir_fd, LOCK_EX) < 0) { - RTE_LOG(ERR, EAL, "Unable to lock runtime directory %s\n", + EAL_LOG(ERR, "Unable to lock runtime directory %s", runtime_dir); goto error; } dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read runtime directory %s\n", + EAL_LOG(ERR, "Unable to read runtime directory %s", runtime_dir); goto error; } @@ -159,7 +159,7 @@ eal_clean_runtime_dir(void) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while clearing runtime dir: %s\n", + EAL_LOG(ERR, "Error while clearing runtime dir: %s", strerror(errno)); return -1; @@ -200,7 +200,7 @@ rte_eal_config_create(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -210,7 +210,7 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot resize '%s' for rte_mem_config", pathname); return -1; } @@ -219,8 +219,8 @@ rte_eal_config_create(void) if (retval < 0){ close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary " - "process running?\n", pathname); + EAL_LOG(ERR, "Cannot create lock on '%s'. 
Is another primary " + "process running?", pathname); return -1; } @@ -228,7 +228,7 @@ rte_eal_config_create(void) rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr, &cfg_len_aligned, page_sz, 0, 0); if (rte_mem_cfg_addr == NULL) { - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n"); + EAL_LOG(ERR, "Cannot mmap memory for rte_config"); close(mem_cfg_fd); mem_cfg_fd = -1; return -1; @@ -242,7 +242,7 @@ rte_eal_config_create(void) munmap(rte_mem_cfg_addr, cfg_len); close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n"); + EAL_LOG(ERR, "Cannot remap memory for rte_config"); return -1; } @@ -275,7 +275,7 @@ rte_eal_config_attach(void) if (mem_cfg_fd < 0){ mem_cfg_fd = open(pathname, O_RDWR); if (mem_cfg_fd < 0) { - RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n", + EAL_LOG(ERR, "Cannot open '%s' for rte_mem_config", pathname); return -1; } @@ -287,7 +287,7 @@ rte_eal_config_attach(void) if (mem_config == MAP_FAILED) { close(mem_cfg_fd); mem_cfg_fd = -1; - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -328,13 +328,13 @@ rte_eal_config_reattach(void) if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) { if (mem_config != MAP_FAILED) { /* errno is stale, don't use */ - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]" + EAL_LOG(ERR, "Cannot mmap memory for rte_config at [%p], got [%p]" " - please use '--" OPT_BASE_VIRTADDR - "' option\n", rte_mem_cfg_addr, mem_config); + "' option", rte_mem_cfg_addr, mem_config); munmap(mem_config, sizeof(struct rte_mem_config)); return -1; } - RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n", + EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)", errno, strerror(errno)); return -1; } @@ -365,7 +365,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + EAL_LOG(INFO, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -392,20 +392,20 @@ rte_config_init(void) return -1; eal_mcfg_wait_complete(); if (eal_mcfg_check_version() < 0) { - RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n"); + EAL_LOG(ERR, "Primary and secondary process DPDK version mismatch"); return -1; } if (rte_eal_config_reattach() < 0) return -1; if (!__rte_mp_enable()) { - RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n"); + EAL_LOG(ERR, "Primary process refused secondary attachment"); return -1; } eal_mcfg_update_internal(); break; case RTE_PROC_AUTO: case RTE_PROC_INVALID: - RTE_LOG(ERR, EAL, "Invalid process type %d\n", + EAL_LOG(ERR, "Invalid process type %d", config->process_type); return -1; } @@ -474,7 +474,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg) len = strnlen(strval, SOCKET_MEM_STRLEN); if (len == SOCKET_MEM_STRLEN) { - RTE_LOG(ERR, EAL, "--socket-mem is too long\n"); + EAL_LOG(ERR, "--socket-mem is too long"); return -1; } @@ -595,13 +595,13 @@ eal_parse_huge_worker_stack(const char *arg) int ret; if (pthread_attr_init(&attr) != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + EAL_LOG(ERR, "Could not retrieve default stack size"); return -1; } ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size); pthread_attr_destroy(&attr); if (ret != 0) { - RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n"); + EAL_LOG(ERR, "Could not retrieve default stack size"); return -1; } } else { @@ -617,7 +617,7 @@ eal_parse_huge_worker_stack(const char *arg) cfg->huge_worker_stack_size = stack_size * 1024; } - RTE_LOG(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack\n", + EAL_LOG(DEBUG, "Each worker thread will use %zu kB of DPDK memory as stack", cfg->huge_worker_stack_size / 1024); return 0; } @@ -673,7 +673,7 @@ eal_parse_args(int argc, char **argv) { char *hdir = strdup(optarg); if (hdir == NULL) - RTE_LOG(ERR, EAL, "Could not store hugepage directory\n"); + EAL_LOG(ERR, "Could not store hugepage directory"); else { /* free old hugepage dir */ free(internal_conf->hugepage_dir); @@ -685,7 +685,7 @@ eal_parse_args(int argc, char **argv) { char *prefix = strdup(optarg); if (prefix == NULL) - RTE_LOG(ERR, EAL, "Could not store file prefix\n"); + EAL_LOG(ERR, "Could not store file prefix"); else { /* free old prefix */ free(internal_conf->hugefile_prefix); @@ -696,8 +696,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_MEM_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_mem) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_MEM "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_SOCKET_MEM); eal_usage(prgname); ret = -1; goto out; @@ -708,8 +708,8 @@ eal_parse_args(int argc, char **argv) case OPT_SOCKET_LIMIT_NUM: if (eal_parse_socket_arg(optarg, internal_conf->socket_limit) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_SOCKET_LIMIT "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_SOCKET_LIMIT); eal_usage(prgname); ret = -1; goto out; @@ -719,8 +719,8 @@ eal_parse_args(int argc, char **argv) case OPT_VFIO_INTR_NUM: if (eal_parse_vfio_intr(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - OPT_VFIO_INTR "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_VFIO_INTR); eal_usage(prgname); ret = -1; goto out; @@ -729,8 +729,8 @@ eal_parse_args(int argc, char **argv) case OPT_VFIO_VF_TOKEN_NUM: if (eal_parse_vfio_vf_token(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameters for --" - 
OPT_VFIO_VF_TOKEN "\n"); + EAL_LOG(ERR, "invalid parameters for --" + OPT_VFIO_VF_TOKEN); eal_usage(prgname); ret = -1; goto out; @@ -745,7 +745,7 @@ eal_parse_args(int argc, char **argv) { char *ops_name = strdup(optarg); if (ops_name == NULL) - RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n"); + EAL_LOG(ERR, "Could not store mbuf pool ops name"); else { /* free old ops name */ free(internal_conf->user_mbuf_pool_ops_name); @@ -761,8 +761,8 @@ eal_parse_args(int argc, char **argv) case OPT_HUGE_WORKER_STACK_NUM: if (eal_parse_huge_worker_stack(optarg) < 0) { - RTE_LOG(ERR, EAL, "invalid parameter for --" - OPT_HUGE_WORKER_STACK"\n"); + EAL_LOG(ERR, "invalid parameter for --" + OPT_HUGE_WORKER_STACK); eal_usage(prgname); ret = -1; goto out; @@ -771,16 +771,16 @@ eal_parse_args(int argc, char **argv) default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Linux\n", opt); + EAL_LOG(ERR, "Option %c is not supported " + "on Linux", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Linux\n", + EAL_LOG(ERR, "Option %s is not supported " + "on Linux", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Linux\n", opt); + EAL_LOG(ERR, "Option %d is not supported " + "on Linux", opt); } eal_usage(prgname); ret = -1; @@ -791,11 +791,11 @@ eal_parse_args(int argc, char **argv) /* create runtime data directory. In no_shconf mode, skip any errors */ if (eal_create_runtime_dir() < 0) { if (internal_conf->no_shconf == 0) { - RTE_LOG(ERR, EAL, "Cannot create runtime directory\n"); + EAL_LOG(ERR, "Cannot create runtime directory"); ret = -1; goto out; } else - RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n"); + EAL_LOG(WARNING, "No DPDK runtime directory created"); } if (eal_adjust_config(internal_conf) != 0) { @@ -843,7 +843,7 @@ eal_check_mem_on_local_socket(void) socket_id = rte_lcore_to_socket_id(config->main_lcore); if (rte_memseg_list_walk(check_socket, &socket_id) == 0) - RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n"); + EAL_LOG(WARNING, "WARNING: Main core has no memory on local socket!"); } static int @@ -880,7 +880,7 @@ static int rte_eal_vfio_setup(void) static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + EAL_LOG(ERR, "%s", msg); } /* @@ -1073,27 +1073,27 @@ rte_eal_init(int argc, char **argv) enum rte_iova_mode iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n"); + EAL_LOG(DEBUG, "Buses did not request a specific IOVA mode."); if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + EAL_LOG(DEBUG, "IOVA as VA mode is forced by build option."); } else if (!phys_addrs) { /* if we have no access to physical addresses, * pick IOVA as VA mode. 
*/ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n"); + EAL_LOG(DEBUG, "Physical addresses are unavailable, selecting IOVA as VA mode."); } else if (is_iommu_enabled()) { /* we have an IOMMU, pick IOVA as VA mode */ iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n"); + EAL_LOG(DEBUG, "IOMMU is available, selecting IOVA as VA mode."); } else { /* physical addresses available, and no IOMMU * found, so pick IOVA as PA. */ iova_mode = RTE_IOVA_PA; - RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n"); + EAL_LOG(DEBUG, "IOMMU is not available, selecting IOVA as PA mode."); } } rte_eal_get_configuration()->iova_mode = iova_mode; @@ -1114,7 +1114,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n", + EAL_LOG(INFO, "Selected IOVA mode '%s'", rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA"); if (internal_conf->no_hugetlbfs == 0) { @@ -1138,11 +1138,11 @@ rte_eal_init(int argc, char **argv) if (internal_conf->vmware_tsc_map == 1) { #ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT rte_cycles_vmware_tsc_map = 1; - RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, " - "you must have monitor_control.pseudo_perfctr = TRUE\n"); + EAL_LOG(DEBUG, "Using VMWARE TSC MAP, " + "you must have monitor_control.pseudo_perfctr = TRUE"); #else - RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because " - "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n"); + EAL_LOG(WARNING, "Ignoring --vmware-tsc-map because " + "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set"); #endif } @@ -1229,7 +1229,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, (uintptr_t)pthread_self(), cpuset, ret == 0 ? "" : "..."); @@ -1350,7 +1350,7 @@ rte_eal_cleanup(void) if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1, rte_memory_order_relaxed, rte_memory_order_relaxed)) { - RTE_LOG(WARNING, EAL, "Already called cleanup\n"); + EAL_LOG(WARNING, "Already called cleanup"); rte_errno = EALREADY; return -1; } @@ -1420,7 +1420,7 @@ rte_eal_check_module(const char *module_name) /* Check if there is sysfs mounted */ if (stat("/sys/module", &st) != 0) { - RTE_LOG(DEBUG, EAL, "sysfs is not mounted! error %i (%s)\n", + EAL_LOG(DEBUG, "sysfs is not mounted! error %i (%s)", errno, strerror(errno)); return -1; } @@ -1428,12 +1428,12 @@ rte_eal_check_module(const char *module_name) /* A module might be built-in, therefore try sysfs */ n = snprintf(sysfs_mod_name, PATH_MAX, "/sys/module/%s", module_name); if (n < 0 || n > PATH_MAX) { - RTE_LOG(DEBUG, EAL, "Could not format module path\n"); + EAL_LOG(DEBUG, "Could not format module path"); return -1; } if (stat(sysfs_mod_name, &st) != 0) { - RTE_LOG(DEBUG, EAL, "Module %s not found! error %i (%s)\n", + EAL_LOG(DEBUG, "Module %s not found! 
error %i (%s)", sysfs_mod_name, errno, strerror(errno)); return 0; } diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c index 766ba2c251..eeb096213b 100644 --- a/lib/eal/linux/eal_alarm.c +++ b/lib/eal/linux/eal_alarm.c @@ -65,7 +65,7 @@ rte_eal_alarm_init(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + EAL_LOG(ERR, "Fail to allocate intr_handle"); goto error; } diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c index ac76f6174d..e678dce6c7 100644 --- a/lib/eal/linux/eal_dev.c +++ b/lib/eal/linux/eal_dev.c @@ -64,7 +64,7 @@ static void sigbus_handler(int signum, siginfo_t *info, { int ret; - RTE_LOG(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p\n", + EAL_LOG(DEBUG, "Thread catch SIGBUS, fault address:%p", info->si_addr); rte_spinlock_lock(&failure_handle_lock); @@ -88,7 +88,7 @@ static void sigbus_handler(int signum, siginfo_t *info, } } - RTE_LOG(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!\n"); + EAL_LOG(DEBUG, "Success to handle SIGBUS for hot-unplug!"); } static int cmp_dev_name(const struct rte_device *dev, @@ -108,7 +108,7 @@ dev_uev_socket_fd_create(void) fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK, NETLINK_KOBJECT_UEVENT); if (fd < 0) { - RTE_LOG(ERR, EAL, "create uevent fd failed.\n"); + EAL_LOG(ERR, "create uevent fd failed."); return -1; } @@ -119,7 +119,7 @@ dev_uev_socket_fd_create(void) ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr)); if (ret < 0) { - RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n"); + EAL_LOG(ERR, "Failed to bind uevent socket."); goto err; } @@ -245,18 +245,18 @@ dev_uev_handler(__rte_unused void *param) return; else if (ret <= 0) { /* connection is closed or broken, can not up again. 
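The dev_uev_socket_fd_create() hunks above only touch the log calls; for context, the underlying pattern is a netlink socket bound to the kernel uevent multicast group. A hedged sketch, assuming group 1 and no receive-buffer tuning:

	#include <string.h>
	#include <unistd.h>
	#include <sys/socket.h>
	#include <linux/netlink.h>

	/* Open a socket that receives kernel device uevents (sketch). */
	static int
	uevent_socket_open(void)
	{
		struct sockaddr_nl addr;
		int fd;

		fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
			    NETLINK_KOBJECT_UEVENT);
		if (fd < 0)
			return -1;

		memset(&addr, 0, sizeof(addr));
		addr.nl_family = AF_NETLINK;
		addr.nl_pid = getpid();
		addr.nl_groups = 1;	/* subscribe to kernel uevent broadcasts */

		if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
			close(fd);
			return -1;
		}
		return fd;
	}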
*/ - RTE_LOG(ERR, EAL, "uevent socket connection is broken.\n"); + EAL_LOG(ERR, "uevent socket connection is broken."); rte_eal_alarm_set(1, dev_delayed_unregister, NULL); return; } ret = dev_uev_parse(buf, &uevent, EAL_UEV_MSG_LEN); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Ignoring uevent '%s'\n", buf); + EAL_LOG(DEBUG, "Ignoring uevent '%s'", buf); return; } - RTE_LOG(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)\n", + EAL_LOG(DEBUG, "receive uevent(name:%s, type:%d, subsystem:%d)", uevent.devname, uevent.type, uevent.subsystem); switch (uevent.subsystem) { @@ -273,7 +273,7 @@ dev_uev_handler(__rte_unused void *param) rte_spinlock_lock(&failure_handle_lock); bus = rte_bus_find_by_name(busname); if (bus == NULL) { - RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", + EAL_LOG(ERR, "Cannot find bus (%s)", busname); goto failure_handle_err; } @@ -281,15 +281,15 @@ dev_uev_handler(__rte_unused void *param) dev = bus->find_device(NULL, cmp_dev_name, uevent.devname); if (dev == NULL) { - RTE_LOG(ERR, EAL, "Cannot find device (%s) on " - "bus (%s)\n", uevent.devname, busname); + EAL_LOG(ERR, "Cannot find device (%s) on " + "bus (%s)", uevent.devname, busname); goto failure_handle_err; } ret = bus->hot_unplug_handler(dev); if (ret) { - RTE_LOG(ERR, EAL, "Can not handle hot-unplug " - "for device (%s)\n", dev->name); + EAL_LOG(ERR, "Can not handle hot-unplug " + "for device (%s)", dev->name); } rte_spinlock_unlock(&failure_handle_lock); } @@ -318,7 +318,7 @@ rte_dev_event_monitor_start(void) intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE); if (intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n"); + EAL_LOG(ERR, "Fail to allocate intr_handle"); goto exit; } @@ -332,7 +332,7 @@ rte_dev_event_monitor_start(void) ret = dev_uev_socket_fd_create(); if (ret) { - RTE_LOG(ERR, EAL, "error create device event fd.\n"); + EAL_LOG(ERR, "error create device event fd."); goto exit; } @@ -362,7 +362,7 @@ rte_dev_event_monitor_stop(void) rte_rwlock_write_lock(&monitor_lock); if (!monitor_refcount) { - RTE_LOG(ERR, EAL, "device event monitor already stopped\n"); + EAL_LOG(ERR, "device event monitor already stopped"); goto exit; } @@ -374,7 +374,7 @@ rte_dev_event_monitor_stop(void) ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler, (void *)-1); if (ret < 0) { - RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n"); + EAL_LOG(ERR, "fail to unregister uevent callback."); goto exit; } @@ -429,8 +429,8 @@ rte_dev_hotplug_handle_enable(void) ret = dev_sigbus_handler_register(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to register sigbus handler for devices.\n"); + EAL_LOG(ERR, + "fail to register sigbus handler for devices."); hotplug_handle = true; @@ -444,8 +444,8 @@ rte_dev_hotplug_handle_disable(void) ret = dev_sigbus_handler_unregister(); if (ret < 0) - RTE_LOG(ERR, EAL, - "fail to unregister sigbus handler for devices.\n"); + EAL_LOG(ERR, + "fail to unregister sigbus handler for devices."); hotplug_handle = false; diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c index 36a495fb1f..d47a19c56a 100644 --- a/lib/eal/linux/eal_hugepage_info.c +++ b/lib/eal/linux/eal_hugepage_info.c @@ -110,7 +110,7 @@ get_num_hugepages(const char *subdir, size_t sz, unsigned int reusable_pages) over_pages = 0; if (num_pages == 0 && over_pages == 0 && reusable_pages) - RTE_LOG(WARNING, EAL, "No available %zu kB hugepages reported\n", + EAL_LOG(WARNING, "No available %zu kB hugepages reported", sz >> 10); num_pages += over_pages; @@ -155,7 
+155,7 @@ get_num_hugepages_on_node(const char *subdir, unsigned int socket, size_t sz) return 0; if (num_pages == 0) - RTE_LOG(WARNING, EAL, "No free %zu kB hugepages reported on node %u\n", + EAL_LOG(WARNING, "No free %zu kB hugepages reported on node %u", sz >> 10, socket); /* @@ -239,7 +239,7 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len) if (rte_strsplit(buf, sizeof(buf), splitstr, _FIELDNAME_MAX, split_tok) != _FIELDNAME_MAX) { - RTE_LOG(ERR, EAL, "Error parsing %s\n", proc_mounts); + EAL_LOG(ERR, "Error parsing %s", proc_mounts); break; /* return NULL */ } @@ -325,7 +325,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dir = opendir(hugedir); if (!dir) { - RTE_LOG(ERR, EAL, "Unable to open hugepage directory %s\n", + EAL_LOG(ERR, "Unable to open hugepage directory %s", hugedir); goto error; } @@ -333,7 +333,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) dirent = readdir(dir); if (!dirent) { - RTE_LOG(ERR, EAL, "Unable to read hugepage directory %s\n", + EAL_LOG(ERR, "Unable to read hugepage directory %s", hugedir); goto error; } @@ -377,7 +377,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data) if (dir) closedir(dir); - RTE_LOG(ERR, EAL, "Error while walking hugepage dir: %s\n", + EAL_LOG(ERR, "Error while walking hugepage dir: %s", strerror(errno)); return -1; @@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd) struct stat st; if (fstat(whd->file_fd, &st) < 0) - RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n", + EAL_LOG(DEBUG, "%s(): stat(\"%s\") failed: %s", __func__, whd->file_name, strerror(errno)); else (*total_size) += st.st_size; @@ -492,8 +492,8 @@ hugepage_info_init(void) dir = opendir(sys_dir_path); if (dir == NULL) { - RTE_LOG(ERR, EAL, - "Cannot open directory %s to read system hugepage info\n", + EAL_LOG(ERR, + "Cannot open directory %s to read system hugepage info", sys_dir_path); return -1; } @@ -520,10 +520,10 @@ hugepage_info_init(void) num_pages = get_num_hugepages(dirent->d_name, hpi->hugepage_sz, 0); if (num_pages > 0) - RTE_LOG(NOTICE, EAL, + EAL_LOG(NOTICE, "%" PRIu32 " hugepages of size " "%" PRIu64 " reserved, but no mounted " - "hugetlbfs found for that size\n", + "hugetlbfs found for that size", num_pages, hpi->hugepage_sz); /* if we have kernel support for reserving hugepages * through mmap, and we're in in-memory mode, treat this @@ -533,9 +533,9 @@ hugepage_info_init(void) */ #ifdef MAP_HUGE_SHIFT if (internal_conf->in_memory) { - RTE_LOG(DEBUG, EAL, "In-memory mode enabled, " + EAL_LOG(DEBUG, "In-memory mode enabled, " "hugepages of size %" PRIu64 " bytes " - "will be allocated anonymously\n", + "will be allocated anonymously", hpi->hugepage_sz); calc_num_pages(hpi, dirent, 0); num_sizes++; @@ -549,8 +549,8 @@ hugepage_info_init(void) /* if blocking lock failed */ if (flock(hpi->lock_descriptor, LOCK_EX) == -1) { - RTE_LOG(CRIT, EAL, - "Failed to lock hugepage directory!\n"); + EAL_LOG(CRIT, + "Failed to lock hugepage directory!"); break; } @@ -626,7 +626,7 @@ eal_hugepage_info_init(void) tmp_hpi = create_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + EAL_LOG(ERR, "Failed to create shared memory!"); return -1; } @@ -641,7 +641,7 @@ eal_hugepage_info_init(void) } if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + EAL_LOG(ERR, "Failed to unmap 
shared memory!"); return -1; } return 0; @@ -657,14 +657,14 @@ int eal_hugepage_info_read(void) tmp_hpi = open_shared_memory(eal_hugepage_info_path(), sizeof(internal_conf->hugepage_info)); if (tmp_hpi == NULL) { - RTE_LOG(ERR, EAL, "Failed to open shared memory!\n"); + EAL_LOG(ERR, "Failed to open shared memory!"); return -1; } memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info)); if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) { - RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n"); + EAL_LOG(ERR, "Failed to unmap shared memory!"); return -1; } return 0; diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c index eabac24992..6436f796eb 100644 --- a/lib/eal/linux/eal_interrupts.c +++ b/lib/eal/linux/eal_interrupts.c @@ -123,7 +123,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n", + EAL_LOG(ERR, "Error enabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -140,7 +140,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + EAL_LOG(ERR, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -168,7 +168,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n", + EAL_LOG(ERR, "Error masking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -184,7 +184,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n", + EAL_LOG(ERR, "Error disabling INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -208,7 +208,7 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle) vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) { - RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n", + EAL_LOG(ERR, "Error unmasking INTx interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -238,7 +238,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n", + EAL_LOG(ERR, "Error enabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -264,7 +264,7 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) { vfio_dev_fd = rte_intr_dev_fd_get(intr_handle); ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n", + EAL_LOG(ERR, "Error disabling MSI interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -303,7 +303,7 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n", + EAL_LOG(ERR, "Error enabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -331,7 +331,7 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) { ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if 
(ret) - RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n", + EAL_LOG(ERR, "Error disabling MSI-X interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -363,7 +363,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) { - RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n", + EAL_LOG(ERR, "Error enabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -392,7 +392,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle) ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); if (ret) - RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n", + EAL_LOG(ERR, "Error disabling req interrupts for fd %d", rte_intr_fd_get(intr_handle)); return ret; @@ -409,16 +409,16 @@ uio_intx_intr_disable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + EAL_LOG(ERR, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* disable interrupts */ command_high |= 0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error disabling interrupts for fd %d\n", + EAL_LOG(ERR, + "Error disabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -435,16 +435,16 @@ uio_intx_intr_enable(const struct rte_intr_handle *intr_handle) /* use UIO config file descriptor for uio_pci_generic */ uio_cfg_fd = rte_intr_dev_fd_get(intr_handle); if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error reading interrupts status for fd %d\n", + EAL_LOG(ERR, + "Error reading interrupts status for fd %d", uio_cfg_fd); return -1; } /* enable interrupts */ command_high &= ~0x4; if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) { - RTE_LOG(ERR, EAL, - "Error enabling interrupts for fd %d\n", + EAL_LOG(ERR, + "Error enabling interrupts for fd %d", uio_cfg_fd); return -1; } @@ -459,7 +459,7 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n", + EAL_LOG(ERR, "Error disabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -473,7 +473,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle) if (rte_intr_fd_get(intr_handle) < 0 || write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) { - RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n", + EAL_LOG(ERR, "Error enabling interrupts for fd %d (%s)", rte_intr_fd_get(intr_handle), strerror(errno)); return -1; } @@ -492,14 +492,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, /* first do parameter checking */ if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) { - RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n"); + EAL_LOG(ERR, "Registering with invalid input parameter"); return -EINVAL; } /* allocate a new interrupt callback entity */ callback = calloc(1, sizeof(*callback)); if (callback == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + EAL_LOG(ERR, "Can not allocate memory"); return -ENOMEM; } callback->cb_fn = cb; @@ -526,14 +526,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle, if (src == NULL) { 
src = calloc(1, sizeof(*src)); if (src == NULL) { - RTE_LOG(ERR, EAL, "Can not allocate memory\n"); + EAL_LOG(ERR, "Can not allocate memory"); ret = -ENOMEM; free(callback); callback = NULL; } else { src->intr_handle = rte_intr_instance_dup(intr_handle); if (src->intr_handle == NULL) { - RTE_LOG(ERR, EAL, "Can not create intr instance\n"); + EAL_LOG(ERR, "Can not create intr instance"); ret = -ENOMEM; free(callback); callback = NULL; @@ -575,7 +575,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + EAL_LOG(ERR, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -625,7 +625,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle, /* do parameter checking first */ if (rte_intr_fd_get(intr_handle) < 0) { - RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n"); + EAL_LOG(ERR, "Unregistering with invalid input parameter"); return -EINVAL; } @@ -752,7 +752,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + EAL_LOG(ERR, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -817,7 +817,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle) return -1; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + EAL_LOG(ERR, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); return -1; } @@ -884,7 +884,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle) break; /* unknown handle type */ default: - RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n", + EAL_LOG(ERR, "Unknown handle type of fd %d", rte_intr_fd_get(intr_handle)); rc = -1; break; @@ -972,8 +972,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) if (errno == EINTR || errno == EWOULDBLOCK) continue; - RTE_LOG(ERR, EAL, "Error reading from file " - "descriptor %d: %s\n", + EAL_LOG(ERR, "Error reading from file " + "descriptor %d: %s", events[n].data.fd, strerror(errno)); /* @@ -995,8 +995,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds) free(src); return -1; } else if (bytes_read == 0) - RTE_LOG(ERR, EAL, "Read nothing from file " - "descriptor %d\n", events[n].data.fd); + EAL_LOG(ERR, "Read nothing from file " + "descriptor %d", events[n].data.fd); else call = true; } @@ -1080,8 +1080,8 @@ eal_intr_handle_interrupts(int pfd, unsigned totalfds) if (nfds < 0) { if (errno == EINTR) continue; - RTE_LOG(ERR, EAL, - "epoll_wait returns with fail\n"); + EAL_LOG(ERR, + "epoll_wait returns with fail"); return; } /* epoll_wait timeout, will never happens here */ @@ -1192,8 +1192,8 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, - "Failed to create thread for interrupt handling\n"); + EAL_LOG(ERR, + "Failed to create thread for interrupt handling"); } return ret; @@ -1226,7 +1226,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) return; default: bytes_read = 1; - RTE_LOG(INFO, EAL, "unexpected intr type\n"); + EAL_LOG(INFO, "unexpected intr type"); break; } @@ -1242,11 +1242,11 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle) if (errno == EINTR || errno == EWOULDBLOCK || errno == EAGAIN) continue; - RTE_LOG(ERR, EAL, - "Error reading from fd %d: %s\n", + EAL_LOG(ERR, + 
"Error reading from fd %d: %s", fd, strerror(errno)); } else if (nbytes == 0) - RTE_LOG(ERR, EAL, "Read nothing from fd %d\n", fd); + EAL_LOG(ERR, "Read nothing from fd %d", fd); return; } while (1); } @@ -1296,8 +1296,8 @@ eal_init_tls_epfd(void) int pfd = epoll_create(255); if (pfd < 0) { - RTE_LOG(ERR, EAL, - "Cannot create epoll instance\n"); + EAL_LOG(ERR, + "Cannot create epoll instance"); return -1; } return pfd; @@ -1320,7 +1320,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, int rc; if (!events) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + EAL_LOG(ERR, "rte_epoll_event can't be NULL"); return -1; } @@ -1342,7 +1342,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events, continue; } /* epoll_wait fail */ - RTE_LOG(ERR, EAL, "epoll_wait returns with fail %s\n", + EAL_LOG(ERR, "epoll_wait returns with fail %s", strerror(errno)); rc = -1; break; @@ -1393,7 +1393,7 @@ rte_epoll_ctl(int epfd, int op, int fd, struct epoll_event ev; if (!event) { - RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n"); + EAL_LOG(ERR, "rte_epoll_event can't be NULL"); return -1; } @@ -1411,7 +1411,7 @@ rte_epoll_ctl(int epfd, int op, int fd, ev.events = event->epdata.event; if (epoll_ctl(epfd, op, fd, &ev) < 0) { - RTE_LOG(ERR, EAL, "Error op %d fd %d epoll_ctl, %s\n", + EAL_LOG(ERR, "Error op %d fd %d epoll_ctl, %s", op, fd, strerror(errno)); if (op == EPOLL_CTL_ADD) /* rollback status when CTL_ADD fail */ @@ -1442,7 +1442,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 || efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) { - RTE_LOG(ERR, EAL, "Wrong intr vector number.\n"); + EAL_LOG(ERR, "Wrong intr vector number."); return -EPERM; } @@ -1452,7 +1452,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) != RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event already been added.\n"); + EAL_LOG(INFO, "Event already been added."); return -EEXIST; } @@ -1465,9 +1465,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = rte_epoll_ctl(epfd, epfd_op, rte_intr_efds_index_get(intr_handle, efd_idx), rev); if (!rc) - RTE_LOG(DEBUG, EAL, - "efd %d associated with vec %d added on epfd %d" - "\n", rev->fd, vec, epfd); + EAL_LOG(DEBUG, + "efd %d associated with vec %d added on epfd %d", + rev->fd, vec, epfd); else rc = -EPERM; break; @@ -1476,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rev = rte_intr_elist_index_get(intr_handle, efd_idx); if (rte_atomic_load_explicit(&rev->status, rte_memory_order_relaxed) == RTE_EPOLL_INVALID) { - RTE_LOG(INFO, EAL, "Event does not exist.\n"); + EAL_LOG(INFO, "Event does not exist."); return -EPERM; } @@ -1485,7 +1485,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd, rc = -EPERM; break; default: - RTE_LOG(ERR, EAL, "event op type mismatch\n"); + EAL_LOG(ERR, "event op type mismatch"); rc = -EPERM; } @@ -1523,8 +1523,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) for (i = 0; i < n; i++) { fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (fd < 0) { - RTE_LOG(ERR, EAL, - "can't setup eventfd, error %i (%s)\n", + EAL_LOG(ERR, + "can't setup eventfd, error %i (%s)", errno, strerror(errno)); return -errno; } @@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd) /* only check, initialization 
would be done in vdev driver.*/ if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) > sizeof(union rte_intr_read_buffer)) { - RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n"); + EAL_LOG(ERR, "the efd_counter_size is oversized"); return -EINVAL; } } else { diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c index 2e6a350603..29b36dd610 100644 --- a/lib/eal/linux/eal_lcore.c +++ b/lib/eal/linux/eal_lcore.c @@ -68,7 +68,7 @@ eal_cpu_core_id(unsigned lcore_id) return (unsigned)id; err: - RTE_LOG(ERR, EAL, "Error reading core id value from %s " - "for lcore %u - assuming core 0\n", SYS_CPU_DIR, lcore_id); + EAL_LOG(ERR, "Error reading core id value from %s " + "for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id); return 0; } diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c index 9853ec78a2..0cc3295994 100644 --- a/lib/eal/linux/eal_memalloc.c +++ b/lib/eal/linux/eal_memalloc.c @@ -147,7 +147,7 @@ check_numa(void) bool ret = true; /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + EAL_LOG(DEBUG, "NUMA is not supported."); ret = false; } return ret; @@ -156,16 +156,16 @@ check_numa(void) static void prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + EAL_LOG(DEBUG, "Trying to obtain current memory policy."); if (get_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Failed to get current mempolicy: %s. " - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); *oldpolicy = MPOL_DEFAULT; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + EAL_LOG(DEBUG, + "Setting policy MPOL_PREFERRED for socket %d", socket_id); numa_set_preferred(socket_id); } @@ -173,13 +173,13 @@ prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id) static void restore_numa(int *oldpolicy, struct bitmask *oldmask) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", *oldpolicy); + EAL_LOG(DEBUG, + "Restoring previous memory policy: %d", *oldpolicy); if (*oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(*oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + EAL_LOG(ERR, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -223,7 +223,7 @@ static int lock(int fd, int type) /* couldn't lock */ return 0; } else if (ret) { - RTE_LOG(ERR, EAL, "%s(): error calling flock(): %s\n", + EAL_LOG(ERR, "%s(): error calling flock(): %s", __func__, strerror(errno)); return -1; } @@ -251,7 +251,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, snprintf(segname, sizeof(segname), "seg_%i", list_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + EAL_LOG(DEBUG, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -265,7 +265,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused, list_idx, seg_idx); fd = memfd_create(segname, flags); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n", + EAL_LOG(DEBUG, "%s(): memfd create failed: %s", __func__, strerror(errno)); return -1; } @@ -316,7 +316,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, */ ret = stat(path, &st); if (ret < 0 && errno != ENOENT) { - RTE_LOG(DEBUG, EAL, "%s(): stat() for '%s' failed: 
%s\n", + EAL_LOG(DEBUG, "%s(): stat() for '%s' failed: %s", __func__, path, strerror(errno)); return -1; } @@ -342,7 +342,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, ret == 0) { /* coverity[toctou] */ if (unlink(path) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): could not remove '%s': %s\n", + EAL_LOG(DEBUG, "%s(): could not remove '%s': %s", __func__, path, strerror(errno)); return -1; } @@ -351,13 +351,13 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi, /* coverity[toctou] */ fd = open(path, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(ERR, EAL, "%s(): open '%s' failed: %s\n", + EAL_LOG(ERR, "%s(): open '%s' failed: %s", __func__, path, strerror(errno)); return -1; } /* take out a read lock */ if (lock(fd, LOCK_SH) < 0) { - RTE_LOG(ERR, EAL, "%s(): lock '%s' failed: %s\n", + EAL_LOG(ERR, "%s(): lock '%s' failed: %s", __func__, path, strerror(errno)); close(fd); return -1; @@ -378,7 +378,7 @@ resize_hugefile_in_memory(int fd, uint64_t fa_offset, ret = fallocate(fd, flags, fa_offset, page_sz); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + EAL_LOG(DEBUG, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -402,7 +402,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, */ if (!grow) { - RTE_LOG(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system\n", + EAL_LOG(DEBUG, "%s(): fallocate not supported, not freeing page back to the system", __func__); return -1; } @@ -414,7 +414,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, *dirty = new_size <= cur_size; if (new_size > cur_size && ftruncate(fd, new_size) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + EAL_LOG(DEBUG, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); return -1; } @@ -444,12 +444,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz, if (ret < 0) { if (fallocate_supported == -1 && errno == ENOTSUP) { - RTE_LOG(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled\n", + EAL_LOG(ERR, "%s(): fallocate() not supported, hugepage deallocation will be disabled", __func__); again = true; fallocate_supported = 0; } else { - RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n", + EAL_LOG(DEBUG, "%s(): fallocate() failed: %s", __func__, strerror(errno)); return -1; @@ -483,7 +483,7 @@ close_hugefile(int fd, char *path, int list_idx) if (!internal_conf->in_memory && rte_eal_process_type() == RTE_PROC_PRIMARY && unlink(path)) - RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n", + EAL_LOG(ERR, "%s(): unlinking '%s' failed: %s", __func__, path, strerror(errno)); close(fd); @@ -536,12 +536,12 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, /* these are checked at init, but code analyzers don't know that */ if (internal_conf->in_memory && !anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory\n"); + EAL_LOG(ERR, "Anonymous hugepages not supported, in-memory mode cannot allocate memory"); return -1; } if (internal_conf->in_memory && !memfd_create_supported && internal_conf->single_file_segments) { - RTE_LOG(ERR, EAL, "Single-file segments are not supported without memfd support\n"); + EAL_LOG(ERR, "Single-file segments are not supported without memfd support"); return -1; } @@ -569,7 +569,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx, 
&dirty); if (fd < 0) { - RTE_LOG(ERR, EAL, "Couldn't get fd on hugepage file\n"); + EAL_LOG(ERR, "Couldn't get fd on hugepage file"); return -1; } @@ -584,14 +584,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, } else { map_offset = 0; if (ftruncate(fd, alloc_sz) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n", + EAL_LOG(DEBUG, "%s(): ftruncate() failed: %s", __func__, strerror(errno)); goto resized; } if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->in_memory) { if (unlink(path)) { - RTE_LOG(DEBUG, EAL, "%s(): unlink() failed: %s\n", + EAL_LOG(DEBUG, "%s(): unlink() failed: %s", __func__, strerror(errno)); goto resized; } @@ -610,7 +610,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, map_offset); if (va == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__, + EAL_LOG(DEBUG, "%s(): mmap() failed: %s", __func__, strerror(errno)); /* mmap failed, but the previous region might have been * unmapped anyway. try to remap it @@ -618,7 +618,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, goto unmapped; } if (va != addr) { - RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__); + EAL_LOG(DEBUG, "%s(): wrong mmap() address", __func__); munmap(va, alloc_sz); goto resized; } @@ -631,7 +631,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB\n", + EAL_LOG(DEBUG, "SIGBUS: Cannot mmap more hugepages of size %uMB", (unsigned int)(alloc_sz >> 20)); goto mapped; } @@ -645,7 +645,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_PHYS_ADDR) { - RTE_LOG(DEBUG, EAL, "%s(): can't get IOVA addr\n", + EAL_LOG(DEBUG, "%s(): can't get IOVA addr", __func__); goto mapped; } @@ -661,19 +661,19 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, ret = get_mempolicy(&cur_socket_id, NULL, 0, addr, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "%s(): get_mempolicy: %s\n", + EAL_LOG(DEBUG, "%s(): get_mempolicy: %s", __func__, strerror(errno)); goto mapped; } else if (cur_socket_id != socket_id) { - RTE_LOG(DEBUG, EAL, - "%s(): allocation happened on wrong socket (wanted %d, got %d)\n", + EAL_LOG(DEBUG, + "%s(): allocation happened on wrong socket (wanted %d, got %d)", __func__, socket_id, cur_socket_id); goto mapped; } } #else if (rte_socket_count() > 1) - RTE_LOG(DEBUG, EAL, "%s(): not checking hugepage NUMA node.\n", + EAL_LOG(DEBUG, "%s(): not checking hugepage NUMA node.", __func__); #endif @@ -703,7 +703,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id, * somebody else maps this hole now, we could accidentally * override it in the future. 
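The alloc_seg() hunk above keeps the get_mempolicy() check that verifies which NUMA node actually backs the newly mapped page. A small sketch of that check, with the caller's policy handling and logging left out:

	#include <errno.h>
	#include <numaif.h>	/* get_mempolicy(), MPOL_F_NODE, MPOL_F_ADDR; link with -lnuma */

	/*
	 * Return 1 if 'addr' is backed by memory on 'expected_node',
	 * 0 if it is not, negative errno if the kernel cannot tell us.
	 */
	static int
	page_on_expected_node(void *addr, int expected_node)
	{
		int node = -1;

		if (get_mempolicy(&node, NULL, 0, addr,
				MPOL_F_NODE | MPOL_F_ADDR) < 0)
			return -errno;

		return node == expected_node;
	}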
*/ - RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n"); + EAL_LOG(CRIT, "Can't mmap holes in our virtual address space"); } /* roll back the ref count */ if (internal_conf->single_file_segments) @@ -748,7 +748,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi, if (mmap(ms->addr, ms->len, PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "couldn't unmap page\n"); + EAL_LOG(DEBUG, "couldn't unmap page"); return -1; } @@ -873,13 +873,13 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + EAL_LOG(ERR, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + EAL_LOG(ERR, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -896,7 +896,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) if (alloc_seg(cur, map_addr, wa->socket, wa->hi, msl_idx, cur_idx)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated\n", + EAL_LOG(DEBUG, "attempted to allocate %i segments, but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ @@ -916,7 +916,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) * may fail. */ if (free_seg(tmp, wa->hi, msl_idx, j)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + EAL_LOG(DEBUG, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -980,13 +980,13 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg) if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) { dir_fd = open(wa->hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", + EAL_LOG(ERR, "%s(): Cannot open '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", + EAL_LOG(ERR, "%s(): Cannot lock '%s': %s", __func__, wa->hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1037,7 +1037,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, } } if (!hi) { - RTE_LOG(ERR, EAL, "%s(): can't find relevant hugepage_info entry\n", + EAL_LOG(ERR, "%s(): can't find relevant hugepage_info entry", __func__); return -1; } @@ -1061,7 +1061,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n", + EAL_LOG(ERR, "%s(): couldn't find suitable memseg_list", __func__); ret = -1; } else if (ret > 0) { @@ -1104,7 +1104,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + EAL_LOG(DEBUG, "Page is not allowed to be freed"); ret = -1; continue; } @@ -1118,7 +1118,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == (int)RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + EAL_LOG(ERR, "Can't find 
relevant hugepage_info entry"); ret = -1; continue; } @@ -1133,7 +1133,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + EAL_LOG(ERR, "Couldn't find memseg list"); ret = -1; } return ret; @@ -1344,13 +1344,13 @@ sync_existing(struct rte_memseg_list *primary_msl, */ dir_fd = open(hi->hugedir, O_RDONLY); if (dir_fd < 0) { - RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", __func__, + EAL_LOG(ERR, "%s(): Cannot open '%s': %s", __func__, hi->hugedir, strerror(errno)); return -1; } /* blocking writelock */ if (flock(dir_fd, LOCK_EX)) { - RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", __func__, + EAL_LOG(ERR, "%s(): Cannot lock '%s': %s", __func__, hi->hugedir, strerror(errno)); close(dir_fd); return -1; @@ -1405,7 +1405,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused) } } if (!hi) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + EAL_LOG(ERR, "Can't find relevant hugepage_info entry"); return -1; } @@ -1454,7 +1454,7 @@ secondary_msl_create_walk(const struct rte_memseg_list *msl, primary_msl->memseg_arr.len, primary_msl->memseg_arr.elt_sz); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot initialize local memory map\n"); + EAL_LOG(ERR, "Cannot initialize local memory map"); return -1; } local_msl->base_va = primary_msl->base_va; @@ -1479,7 +1479,7 @@ secondary_msl_destroy_walk(const struct rte_memseg_list *msl, ret = rte_fbarray_destroy(&local_msl->memseg_arr); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot destroy local memory map\n"); + EAL_LOG(ERR, "Cannot destroy local memory map"); return -1; } local_msl->base_va = NULL; @@ -1501,7 +1501,7 @@ alloc_list(int list_idx, int len) /* ensure we have space to store fd per each possible segment */ data = malloc(sizeof(int) * len); if (data == NULL) { - RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n"); + EAL_LOG(ERR, "Unable to allocate space for file descriptors"); return -1; } /* set all fd's as invalid */ @@ -1750,13 +1750,13 @@ eal_memalloc_init(void) int mfd_res = test_memfd_create(); if (mfd_res < 0) { - RTE_LOG(ERR, EAL, "Unable to check if memfd is supported\n"); + EAL_LOG(ERR, "Unable to check if memfd is supported"); return -1; } if (mfd_res == 1) - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + EAL_LOG(DEBUG, "Using memfd for anonymous memory"); else - RTE_LOG(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages\n"); + EAL_LOG(INFO, "Using memfd is not supported, falling back to anonymous hugepages"); /* we only support single-file segments mode with in-memory mode * if we support hugetlbfs with memfd_create. 
this code will @@ -1764,18 +1764,18 @@ eal_memalloc_init(void) */ if (internal_conf->single_file_segments && mfd_res != 1) { - RTE_LOG(ERR, EAL, "Single-file segments mode cannot be used without memfd support\n"); + EAL_LOG(ERR, "Single-file segments mode cannot be used without memfd support"); return -1; } /* this cannot ever happen but better safe than sorry */ if (!anonymous_hugepages_supported) { - RTE_LOG(ERR, EAL, "Using anonymous memory is not supported\n"); + EAL_LOG(ERR, "Using anonymous memory is not supported"); return -1; } /* safety net, should be impossible to configure */ if (internal_conf->hugepage_file.unlink_before_mapping && !internal_conf->hugepage_file.unlink_existing) { - RTE_LOG(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.\n"); + EAL_LOG(ERR, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping."); return -1; } } diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c index 9b6f08fba8..45879ca743 100644 --- a/lib/eal/linux/eal_memory.c +++ b/lib/eal/linux/eal_memory.c @@ -104,7 +104,7 @@ rte_mem_virt2phy(const void *virtaddr) fd = open("/proc/self/pagemap", O_RDONLY); if (fd < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s\n", + EAL_LOG(INFO, "%s(): cannot open /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } @@ -112,7 +112,7 @@ rte_mem_virt2phy(const void *virtaddr) virt_pfn = (unsigned long)virtaddr / page_size; offset = sizeof(uint64_t) * virt_pfn; if (lseek(fd, offset, SEEK_SET) == (off_t) -1) { - RTE_LOG(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s\n", + EAL_LOG(INFO, "%s(): seek error in /proc/self/pagemap: %s", __func__, strerror(errno)); close(fd); return RTE_BAD_IOVA; @@ -121,12 +121,12 @@ rte_mem_virt2phy(const void *virtaddr) retval = read(fd, &page, PFN_MASK_SIZE); close(fd); if (retval < 0) { - RTE_LOG(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s\n", + EAL_LOG(INFO, "%s(): cannot read /proc/self/pagemap: %s", __func__, strerror(errno)); return RTE_BAD_IOVA; } else if (retval != PFN_MASK_SIZE) { - RTE_LOG(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap " - "but expected %d:\n", + EAL_LOG(INFO, "%s(): read %d bytes from /proc/self/pagemap " + "but expected %d:", __func__, retval, PFN_MASK_SIZE); return RTE_BAD_IOVA; } @@ -237,7 +237,7 @@ static int huge_wrap_sigsetjmp(void) /* Callback for numa library. */ void numa_error(char *where) { - RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno)); + EAL_LOG(ERR, "%s failed: %s", where, strerror(errno)); } #endif @@ -267,18 +267,18 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* Check if kernel supports NUMA. */ if (numa_available() != 0) { - RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n"); + EAL_LOG(DEBUG, "NUMA is not supported."); have_numa = false; } if (have_numa) { - RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n"); + EAL_LOG(DEBUG, "Trying to obtain current memory policy."); oldmask = numa_allocate_nodemask(); if (get_mempolicy(&oldpolicy, oldmask->maskp, oldmask->size + 1, 0, 0) < 0) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Failed to get current mempolicy: %s. 
" - "Assuming MPOL_DEFAULT.\n", strerror(errno)); + "Assuming MPOL_DEFAULT.", strerror(errno)); oldpolicy = MPOL_DEFAULT; } for (i = 0; i < RTE_MAX_NUMA_NODES; i++) @@ -316,8 +316,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, essential_memory[j] -= hugepage_sz; } - RTE_LOG(DEBUG, EAL, - "Setting policy MPOL_PREFERRED for socket %d\n", + EAL_LOG(DEBUG, + "Setting policy MPOL_PREFERRED for socket %d", node_id); numa_set_preferred(node_id); } @@ -332,7 +332,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* try to create hugepage file */ fd = open(hf->filepath, O_CREAT | O_RDWR, 0600); if (fd < 0) { - RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__, + EAL_LOG(DEBUG, "%s(): open failed: %s", __func__, strerror(errno)); goto out; } @@ -345,7 +345,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, virtaddr = mmap(NULL, hugepage_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE, fd, 0); if (virtaddr == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__, + EAL_LOG(DEBUG, "%s(): mmap failed: %s", __func__, strerror(errno)); close(fd); goto out; @@ -361,8 +361,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, * back here. */ if (huge_wrap_sigsetjmp()) { - RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more " - "hugepages of size %u MB\n", + EAL_LOG(DEBUG, "SIGBUS: Cannot mmap more " + "hugepages of size %u MB", (unsigned int)(hugepage_sz / 0x100000)); munmap(virtaddr, hugepage_sz); close(fd); @@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n", + EAL_LOG(DEBUG, "%s(): Locking file failed:%s ", __func__, strerror(errno)); close(fd); goto out; @@ -390,13 +390,13 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi, out: #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES if (maxnode) { - RTE_LOG(DEBUG, EAL, - "Restoring previous memory policy: %d\n", oldpolicy); + EAL_LOG(DEBUG, + "Restoring previous memory policy: %d", oldpolicy); if (oldpolicy == MPOL_DEFAULT) { numa_set_localalloc(); } else if (set_mempolicy(oldpolicy, oldmask->maskp, oldmask->size + 1) < 0) { - RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n", + EAL_LOG(ERR, "Failed to restore mempolicy: %s", strerror(errno)); numa_set_localalloc(); } @@ -424,8 +424,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) f = fopen("/proc/self/numa_maps", "r"); if (f == NULL) { - RTE_LOG(NOTICE, EAL, "NUMA support not available" - " consider that all memory is in socket_id 0\n"); + EAL_LOG(NOTICE, "NUMA support not available" + " consider that all memory is in socket_id 0"); return 0; } @@ -443,20 +443,20 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) /* get zone addr */ virt_addr = strtoull(buf, &end, 16); if (virt_addr == 0 || end == buf) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + EAL_LOG(ERR, "%s(): error in numa_maps parsing", __func__); goto error; } /* get node id (socket id) */ nodestr = strstr(buf, " N"); if (nodestr == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + EAL_LOG(ERR, "%s(): error in numa_maps parsing", __func__); goto error; } nodestr += 2; end = strstr(nodestr, "="); if (end == NULL) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + 
EAL_LOG(ERR, "%s(): error in numa_maps parsing", __func__); goto error; } end[0] = '\0'; @@ -464,7 +464,7 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) socket_id = strtoul(nodestr, &end, 0); if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) { - RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__); + EAL_LOG(ERR, "%s(): error in numa_maps parsing", __func__); goto error; } @@ -475,8 +475,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi) hugepg_tbl[i].socket_id = socket_id; hp_count++; #ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES - RTE_LOG(DEBUG, EAL, - "Hugepage %s is on socket %d\n", + EAL_LOG(DEBUG, + "Hugepage %s is on socket %d", hugepg_tbl[i].filepath, socket_id); #endif } @@ -589,7 +589,7 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl, struct hugepage_file *hp = &hugepg_tbl[page]; if (hp->orig_va != NULL && unlink(hp->filepath)) { - RTE_LOG(WARNING, EAL, "%s(): Removing %s failed: %s\n", + EAL_LOG(WARNING, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); } } @@ -639,7 +639,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl, hp->orig_va = NULL; if (unlink(hp->filepath) == -1) { - RTE_LOG(ERR, EAL, "%s(): Removing %s failed: %s\n", + EAL_LOG(ERR, "%s(): Removing %s failed: %s", __func__, hp->filepath, strerror(errno)); return -1; } @@ -676,7 +676,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) socket_id = hugepages[seg_start].socket_id; seg_len = seg_end - seg_start; - RTE_LOG(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i\n", + EAL_LOG(DEBUG, "Attempting to map %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20ULL, socket_id); /* find free space in memseg lists */ @@ -716,8 +716,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " - "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n"); + EAL_LOG(ERR, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST " + "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration."); return -1; } @@ -735,13 +735,13 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) fd = open(hfile->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open '%s': %s\n", + EAL_LOG(ERR, "Could not open '%s': %s", hfile->filepath, strerror(errno)); return -1; } /* set shared lock on the file. 
*/ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "Could not lock '%s': %s\n", + EAL_LOG(DEBUG, "Could not lock '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -755,7 +755,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) addr = mmap(addr, page_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0); if (addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Couldn't remap '%s': %s\n", + EAL_LOG(ERR, "Couldn't remap '%s': %s", hfile->filepath, strerror(errno)); close(fd); return -1; @@ -790,10 +790,10 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end) /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + EAL_LOG(ERR, "Could not store segment fd: %s", rte_strerror(rte_errno)); } - RTE_LOG(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i\n", + EAL_LOG(DEBUG, "Allocated %" PRIu64 "M on socket %i", (seg_len * page_sz) >> 20, socket_id); return seg_len; } @@ -819,7 +819,7 @@ static int memseg_list_free(struct rte_memseg_list *msl) { if (rte_fbarray_destroy(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n"); + EAL_LOG(ERR, "Cannot destroy memseg list"); return -1; } memset(msl, 0, sizeof(*msl)); @@ -965,7 +965,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) break; } if (msl_idx == RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n"); + EAL_LOG(ERR, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -976,7 +976,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages) /* finally, allocate VA space */ if (eal_memseg_list_alloc(msl, 0) < 0) { - RTE_LOG(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages\n", + EAL_LOG(ERR, "Cannot preallocate 0x%"PRIx64"kB hugepages", page_sz >> 10); return -1; } @@ -1177,15 +1177,15 @@ eal_legacy_hugepage_init(void) /* create a memfd and store it in the segment fd table */ memfd = memfd_create("nohuge", 0); if (memfd < 0) { - RTE_LOG(DEBUG, EAL, "Cannot create memfd: %s\n", + EAL_LOG(DEBUG, "Cannot create memfd: %s", strerror(errno)); - RTE_LOG(DEBUG, EAL, "Falling back to anonymous map\n"); + EAL_LOG(DEBUG, "Falling back to anonymous map"); } else { /* we got an fd - now resize it */ if (ftruncate(memfd, internal_conf->memory) < 0) { - RTE_LOG(ERR, EAL, "Cannot resize memfd: %s\n", + EAL_LOG(ERR, "Cannot resize memfd: %s", strerror(errno)); - RTE_LOG(ERR, EAL, "Falling back to anonymous map\n"); + EAL_LOG(ERR, "Falling back to anonymous map"); close(memfd); } else { /* creating memfd-backed file was successful. @@ -1193,7 +1193,7 @@ eal_legacy_hugepage_init(void) * other processes (such as vhost backend), so * map it as shared memory. */ - RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n"); + EAL_LOG(DEBUG, "Using memfd for anonymous memory"); fd = memfd; flags = MAP_SHARED; } @@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void) * fit into the DMA mask. 
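The remap_segment() hunks above follow a fixed pattern: reopen the hugepage file, take a shared flock() so it cannot vanish underneath us, then mmap() it at the address reserved in the memseg list. A trimmed sketch, assuming the VA range was already reserved by the caller:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/file.h>
	#include <sys/mman.h>

	/* Map an existing hugepage file at a pre-reserved address (sketch). */
	static void *
	remap_hugepage(const char *path, void *addr, size_t page_sz)
	{
		void *va;
		int fd;

		fd = open(path, O_RDWR);
		if (fd < 0)
			return NULL;

		if (flock(fd, LOCK_SH) < 0) {
			close(fd);
			return NULL;
		}

		va = mmap(addr, page_sz, PROT_READ | PROT_WRITE,
			  MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0);
		if (va == MAP_FAILED) {
			close(fd);
			return NULL;
		}
		return va;	/* fd stays open on purpose: it backs the mapping */
	}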
*/ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + EAL_LOG(ERR, "Cannot preallocate VA space for hugepage memory"); return -1; } @@ -1211,7 +1211,7 @@ eal_legacy_hugepage_init(void) addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE, flags | MAP_FIXED, fd, 0); if (addr == MAP_FAILED || addr != prealloc_addr) { - RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__, + EAL_LOG(ERR, "%s: mmap() failed: %s", __func__, strerror(errno)); munmap(prealloc_addr, mem_sz); return -1; @@ -1222,7 +1222,7 @@ eal_legacy_hugepage_init(void) */ if (fd != -1) { if (eal_memalloc_set_seg_list_fd(0, fd) < 0) { - RTE_LOG(ERR, EAL, "Cannot set up segment list fd\n"); + EAL_LOG(ERR, "Cannot set up segment list fd"); /* not a serious error, proceed */ } } @@ -1231,13 +1231,13 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + EAL_LOG(ERR, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); if (rte_eal_iova_mode() == RTE_IOVA_VA && rte_eal_using_phys_addrs()) - RTE_LOG(ERR, EAL, - "%s(): Please try initializing EAL with --iova-mode=pa parameter.\n", + EAL_LOG(ERR, + "%s(): Please try initializing EAL with --iova-mode=pa parameter.", __func__); goto fail; } @@ -1292,8 +1292,8 @@ eal_legacy_hugepage_init(void) pages_old = hpi->num_pages[0]; pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, memory); if (pages_new < pages_old) { - RTE_LOG(DEBUG, EAL, - "%d not %d hugepages of size %u MB allocated\n", + EAL_LOG(DEBUG, + "%d not %d hugepages of size %u MB allocated", pages_new, pages_old, (unsigned)(hpi->hugepage_sz / 0x100000)); @@ -1309,23 +1309,23 @@ eal_legacy_hugepage_init(void) rte_eal_iova_mode() != RTE_IOVA_VA) { /* find physical addresses for each hugepage */ if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to find phys addr " - "for %u MB pages\n", + EAL_LOG(DEBUG, "Failed to find phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } else { /* set physical addresses for each hugepage */ if (set_physaddrs(&tmp_hp[hp_offset], hpi) < 0) { - RTE_LOG(DEBUG, EAL, "Failed to set phys addr " - "for %u MB pages\n", + EAL_LOG(DEBUG, "Failed to set phys addr " + "for %u MB pages", (unsigned int)(hpi->hugepage_sz / 0x100000)); goto fail; } } if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){ - RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n", + EAL_LOG(DEBUG, "Failed to find NUMA socket for %u MB pages", (unsigned)(hpi->hugepage_sz / 0x100000)); goto fail; } @@ -1382,9 +1382,9 @@ eal_legacy_hugepage_init(void) for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) { for (j = 0; j < RTE_MAX_NUMA_NODES; j++) { if (used_hp[i].num_pages[j] > 0) { - RTE_LOG(DEBUG, EAL, + EAL_LOG(DEBUG, "Requesting %u pages of size %uMB" - " from socket %i\n", + " from socket %i", used_hp[i].num_pages[j], (unsigned) (used_hp[i].hugepage_sz / 0x100000), @@ -1398,7 +1398,7 @@ eal_legacy_hugepage_init(void) nr_hugefiles * sizeof(struct hugepage_file)); if (hugepage == NULL) { - RTE_LOG(ERR, EAL, "Failed to create shared memory!\n"); + EAL_LOG(ERR, "Failed to create shared memory!"); goto fail; } memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file)); @@ -1409,7 +1409,7 @@ eal_legacy_hugepage_init(void) */ if (unmap_unneeded_hugepages(tmp_hp, 
used_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n"); + EAL_LOG(ERR, "Unmapping and locking hugepages failed!"); goto fail; } @@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void) */ if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles, tmp_hp, nr_hugefiles) < 0) { - RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n"); + EAL_LOG(ERR, "Copying tables to shared memory failed!"); goto fail; } @@ -1428,7 +1428,7 @@ eal_legacy_hugepage_init(void) /* for legacy 32-bit mode, we did not preallocate VA space, so do it */ if (internal_conf->legacy_mem && prealloc_segments(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Could not preallocate VA space for hugepages\n"); + EAL_LOG(ERR, "Could not preallocate VA space for hugepages"); goto fail; } #endif @@ -1437,14 +1437,14 @@ eal_legacy_hugepage_init(void) * pages become first-class citizens in DPDK memory subsystem */ if (remap_needed_hugepages(hugepage, nr_hugefiles)) { - RTE_LOG(ERR, EAL, "Couldn't remap hugepage files into memseg lists\n"); + EAL_LOG(ERR, "Couldn't remap hugepage files into memseg lists"); goto fail; } /* free the hugepage backing files */ if (internal_conf->hugepage_file.unlink_before_mapping && unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) { - RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n"); + EAL_LOG(ERR, "Unlinking hugepage files failed!"); goto fail; } @@ -1480,8 +1480,8 @@ eal_legacy_hugepage_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, - "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n", + EAL_LOG(ERR, + "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.", __func__); goto fail; } @@ -1527,15 +1527,15 @@ eal_legacy_hugepage_attach(void) int fd, fd_hugepage = -1; if (aslr_enabled() > 0) { - RTE_LOG(WARNING, EAL, "WARNING: Address Space Layout Randomization " - "(ASLR) is enabled in the kernel.\n"); - RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory " - "into secondary processes\n"); + EAL_LOG(WARNING, "WARNING: Address Space Layout Randomization " + "(ASLR) is enabled in the kernel."); + EAL_LOG(WARNING, " This may cause issues with mapping memory " + "into secondary processes"); } fd_hugepage = open(eal_hugepage_data_path(), O_RDONLY); if (fd_hugepage < 0) { - RTE_LOG(ERR, EAL, "Could not open %s\n", + EAL_LOG(ERR, "Could not open %s", eal_hugepage_data_path()); goto error; } @@ -1543,13 +1543,13 @@ eal_legacy_hugepage_attach(void) size = getFileSize(fd_hugepage); hp = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd_hugepage, 0); if (hp == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not mmap %s\n", + EAL_LOG(ERR, "Could not mmap %s", eal_hugepage_data_path()); goto error; } num_hp = size / sizeof(struct hugepage_file); - RTE_LOG(DEBUG, EAL, "Analysing %u files\n", num_hp); + EAL_LOG(DEBUG, "Analysing %u files", num_hp); /* map all segments into memory to make sure we get the addrs. 
the * segments themselves are already in memseg list (which is shared and @@ -1570,7 +1570,7 @@ eal_legacy_hugepage_attach(void) fd = open(hf->filepath, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, EAL, "Could not open %s: %s\n", + EAL_LOG(ERR, "Could not open %s: %s", hf->filepath, strerror(errno)); goto error; } @@ -1578,14 +1578,14 @@ eal_legacy_hugepage_attach(void) map_addr = mmap(map_addr, map_sz, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0); if (map_addr == MAP_FAILED) { - RTE_LOG(ERR, EAL, "Could not map %s: %s\n", + EAL_LOG(ERR, "Could not map %s: %s", hf->filepath, strerror(errno)); goto fd_error; } /* set shared lock on the file. */ if (flock(fd, LOCK_SH) < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Locking file failed: %s\n", + EAL_LOG(DEBUG, "%s(): Locking file failed: %s", __func__, strerror(errno)); goto mmap_error; } @@ -1593,13 +1593,13 @@ eal_legacy_hugepage_attach(void) /* find segment data */ msl = rte_mem_virt2memseg_list(map_addr); if (msl == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg list\n", + EAL_LOG(DEBUG, "%s(): Cannot find memseg list", __func__); goto mmap_error; } ms = rte_mem_virt2memseg(map_addr, msl); if (ms == NULL) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg\n", + EAL_LOG(DEBUG, "%s(): Cannot find memseg", __func__); goto mmap_error; } @@ -1607,14 +1607,14 @@ eal_legacy_hugepage_attach(void) msl_idx = msl - mcfg->memsegs; ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms); if (ms_idx < 0) { - RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg idx\n", + EAL_LOG(DEBUG, "%s(): Cannot find memseg idx", __func__); goto mmap_error; } /* store segment fd internally */ if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0) - RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n", + EAL_LOG(ERR, "Could not store segment fd: %s", rte_strerror(rte_errno)); } /* unmap the hugepage config file, since we are done using it */ @@ -1642,9 +1642,9 @@ static int eal_hugepage_attach(void) { if (eal_memalloc_sync_with_primary()) { - RTE_LOG(ERR, EAL, "Could not map memory from primary process\n"); + EAL_LOG(ERR, "Could not map memory from primary process"); if (aslr_enabled() > 0) - RTE_LOG(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes\n"); + EAL_LOG(ERR, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes"); return -1; } return 0; @@ -1740,7 +1740,7 @@ memseg_primary_init_32(void) max_mem = (uint64_t)RTE_MAX_MEM_MB << 20; if (total_requested_mem > max_mem) { - RTE_LOG(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory\n", + EAL_LOG(ERR, "Invalid parameters: 32-bit process can at most use %uM of memory", (unsigned int)(max_mem >> 20)); return -1; } @@ -1787,7 +1787,7 @@ memseg_primary_init_32(void) skip |= active_sockets == 0 && socket_id != main_lcore_socket; if (skip) { - RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n", + EAL_LOG(DEBUG, "Will not preallocate memory on socket %u", socket_id); continue; } @@ -1819,8 +1819,8 @@ memseg_primary_init_32(void) max_pagesz_mem = RTE_ALIGN_FLOOR(max_pagesz_mem, hugepage_sz); - RTE_LOG(DEBUG, EAL, "Attempting to preallocate " - "%" PRIu64 "M on socket %i\n", + EAL_LOG(DEBUG, "Attempting to preallocate " + "%" PRIu64 "M on socket %i", max_pagesz_mem >> 20, socket_id); type_msl_idx = 0; @@ -1830,8 +1830,8 @@ memseg_primary_init_32(void) unsigned int n_segs; if (msl_idx >= RTE_MAX_MEMSEG_LISTS) { - RTE_LOG(ERR, EAL, - "No more space in memseg lists, please increase 
RTE_MAX_MEMSEG_LISTS\n"); + EAL_LOG(ERR, + "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS"); return -1; } @@ -1847,7 +1847,7 @@ memseg_primary_init_32(void) /* failing to allocate a memseg list is * a serious error. */ - RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n"); + EAL_LOG(ERR, "Cannot allocate memseg list"); return -1; } @@ -1855,7 +1855,7 @@ memseg_primary_init_32(void) /* if we couldn't allocate VA space, we * can try with smaller page sizes. */ - RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n"); + EAL_LOG(ERR, "Cannot allocate VA space for memseg list, retrying with different page size"); /* deallocate memseg list */ if (memseg_list_free(msl)) return -1; @@ -1870,7 +1870,7 @@ memseg_primary_init_32(void) cur_socket_mem += cur_pagesz_mem; } if (cur_socket_mem == 0) { - RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n", + EAL_LOG(ERR, "Cannot allocate VA space on socket %u", socket_id); return -1; } @@ -1901,13 +1901,13 @@ memseg_secondary_init(void) continue; if (rte_fbarray_attach(&msl->memseg_arr)) { - RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n"); + EAL_LOG(ERR, "Cannot attach to primary process memseg lists"); return -1; } /* preallocate VA space */ if (eal_memseg_list_alloc(msl, 0)) { - RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n"); + EAL_LOG(ERR, "Cannot preallocate VA space for hugepage memory"); return -1; } } @@ -1930,21 +1930,21 @@ rte_eal_memseg_init(void) lim.rlim_cur = lim.rlim_max; if (setrlimit(RLIMIT_NOFILE, &lim) < 0) { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files failed: %s\n", + EAL_LOG(DEBUG, "Setting maximum number of open files failed: %s", strerror(errno)); } else { - RTE_LOG(DEBUG, EAL, "Setting maximum number of open files to %" - PRIu64 "\n", + EAL_LOG(DEBUG, "Setting maximum number of open files to %" + PRIu64, (uint64_t)lim.rlim_cur); } } else { - RTE_LOG(ERR, EAL, "Cannot get current resource limits\n"); + EAL_LOG(ERR, "Cannot get current resource limits"); } #ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES if (!internal_conf->legacy_mem && rte_socket_count() > 1) { - RTE_LOG(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.\n"); - RTE_LOG(WARNING, EAL, "This will have adverse consequences for performance and usability.\n"); - RTE_LOG(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.\n"); + EAL_LOG(WARNING, "DPDK is running on a NUMA system, but is compiled without NUMA support."); + EAL_LOG(WARNING, "This will have adverse consequences for performance and usability."); + EAL_LOG(WARNING, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support."); } #endif diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c index 880070c627..7051840cdf 100644 --- a/lib/eal/linux/eal_thread.c +++ b/lib/eal/linux/eal_thread.c @@ -13,6 +13,8 @@ #include <rte_log.h> #include <rte_string_fns.h> +#include "eal_private.h" + /* require calling thread tid by gettid() */ int rte_sys_gettid(void) { @@ -28,7 +30,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) const size_t truncatedsz = sizeof(truncated); if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz) - RTE_LOG(DEBUG, EAL, "Truncated thread name\n"); + EAL_LOG(DEBUG, "Truncated thread name"); ret = pthread_setname_np((pthread_t)thread_id.opaque_id, truncated); #endif @@ -37,5 +39,5 @@ void rte_thread_set_name(rte_thread_t thread_id, const char 
*thread_name) RTE_SET_USED(thread_name); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + EAL_LOG(DEBUG, "Failed to set thread name"); } diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c index df9ad61ae9..1cb1e92193 100644 --- a/lib/eal/linux/eal_timer.c +++ b/lib/eal/linux/eal_timer.c @@ -139,20 +139,20 @@ rte_eal_hpet_init(int make_default) eal_get_internal_configuration(); if (internal_conf->no_hpet) { - RTE_LOG(NOTICE, EAL, "HPET is disabled\n"); + EAL_LOG(NOTICE, "HPET is disabled"); return -1; } fd = open(DEV_HPET, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!\n", + EAL_LOG(ERR, "ERROR: Cannot open "DEV_HPET": %s!", strerror(errno)); internal_conf->no_hpet = 1; return -1; } eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0); if (eal_hpet == MAP_FAILED) { - RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n"); + EAL_LOG(ERR, "ERROR: Cannot mmap "DEV_HPET"!"); close(fd); internal_conf->no_hpet = 1; return -1; @@ -166,7 +166,7 @@ rte_eal_hpet_init(int make_default) eal_hpet_resolution_hz = (1000ULL*1000ULL*1000ULL*1000ULL*1000ULL) / (uint64_t)eal_hpet_resolution_fs; - RTE_LOG(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz\n", + EAL_LOG(INFO, "HPET frequency is ~%"PRIu64" kHz", eal_hpet_resolution_hz/1000); eal_hpet_msb = (eal_hpet->counter_l >> 30); @@ -176,7 +176,7 @@ rte_eal_hpet_init(int make_default) ret = rte_thread_create_internal_control(&msb_inc_thread_id, "hpet-msb", hpet_msb_inc, NULL); if (ret != 0) { - RTE_LOG(ERR, EAL, "ERROR: Cannot create HPET timer thread!\n"); + EAL_LOG(ERR, "ERROR: Cannot create HPET timer thread!"); internal_conf->no_hpet = 1; return -1; } diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c index ad3c1654b2..5061f3c1cc 100644 --- a/lib/eal/linux/eal_vfio.c +++ b/lib/eal/linux/eal_vfio.c @@ -367,7 +367,7 @@ vfio_open_group_fd(int iommu_group_num) if (vfio_group_fd < 0) { /* if file not found, it's not an error */ if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open %s: %s\n", + EAL_LOG(ERR, "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -379,8 +379,8 @@ vfio_open_group_fd(int iommu_group_num) vfio_group_fd = open(filename, O_RDWR); if (vfio_group_fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, - "Cannot open %s: %s\n", + EAL_LOG(ERR, + "Cannot open %s: %s", filename, strerror(errno)); return -1; } @@ -408,14 +408,14 @@ vfio_open_group_fd(int iommu_group_num) if (p->result == SOCKET_OK && mp_rep->num_fds == 1) { vfio_group_fd = mp_rep->fds[0]; } else if (p->result == SOCKET_NO_FD) { - RTE_LOG(ERR, EAL, "Bad VFIO group fd\n"); + EAL_LOG(ERR, "Bad VFIO group fd"); vfio_group_fd = -ENOENT; } } free(mp_reply.msgs); if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT) - RTE_LOG(ERR, EAL, "Cannot request VFIO group fd\n"); + EAL_LOG(ERR, "Cannot request VFIO group fd"); return vfio_group_fd; } @@ -452,7 +452,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* Lets see first if there is room for a new group */ if (vfio_cfg->vfio_active_groups == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "Maximum number of VFIO groups reached!\n"); + EAL_LOG(ERR, "Maximum number of VFIO groups reached!"); return -1; } @@ -465,13 +465,13 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg, /* This should not happen */ if (i == VFIO_MAX_GROUPS) { - RTE_LOG(ERR, EAL, "No VFIO group free slot found\n"); + EAL_LOG(ERR, "No VFIO group free slot found"); return -1; } vfio_group_fd = vfio_open_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(ERR, EAL, "Failed 
to open VFIO group %d\n", + EAL_LOG(ERR, "Failed to open VFIO group %d", iommu_group_num); return vfio_group_fd; } @@ -551,13 +551,13 @@ vfio_group_device_get(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + EAL_LOG(ERR, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + EAL_LOG(ERR, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices++; } @@ -570,13 +570,13 @@ vfio_group_device_put(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + EAL_LOG(ERR, "Invalid VFIO group fd!"); return; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + EAL_LOG(ERR, "Wrong VFIO group index (%d)", i); else vfio_cfg->vfio_groups[i].devices--; } @@ -589,13 +589,13 @@ vfio_group_device_count(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + EAL_LOG(ERR, "Invalid VFIO group fd!"); return -1; } i = get_vfio_group_idx(vfio_group_fd); if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) { - RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i); + EAL_LOG(ERR, "Wrong VFIO group index (%d)", i); return -1; } @@ -636,8 +636,8 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len, while (cur_len < len) { /* some memory segments may have invalid IOVA */ if (ms->iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Memory segment at %p has bad IOVA, skipping\n", + EAL_LOG(DEBUG, + "Memory segment at %p has bad IOVA, skipping", ms->addr); goto next; } @@ -670,7 +670,7 @@ vfio_sync_default_container(void) /* default container fd should have been opened in rte_vfio_enable() */ if (!default_vfio_cfg->vfio_enabled || default_vfio_cfg->vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, "VFIO support is not initialized\n"); + EAL_LOG(ERR, "VFIO support is not initialized"); return -1; } @@ -690,8 +690,8 @@ vfio_sync_default_container(void) } free(mp_reply.msgs); if (iommu_type_id < 0) { - RTE_LOG(ERR, EAL, - "Could not get IOMMU type for default container\n"); + EAL_LOG(ERR, + "Could not get IOMMU type for default container"); return -1; } @@ -708,7 +708,7 @@ vfio_sync_default_container(void) return 0; } - RTE_LOG(ERR, EAL, "Could not find IOMMU type id (%i)\n", + EAL_LOG(ERR, "Could not find IOMMU type id (%i)", iommu_type_id); return -1; } @@ -721,7 +721,7 @@ rte_vfio_clear_group(int vfio_group_fd) vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n"); + EAL_LOG(ERR, "Invalid VFIO group fd!"); return -1; } @@ -756,8 +756,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret == 0) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + EAL_LOG(NOTICE, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; } @@ -776,8 +776,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * isn't managed by VFIO */ if (vfio_group_fd == -ENOENT) { - RTE_LOG(NOTICE, EAL, - "%s not managed by VFIO driver, skipping\n", + EAL_LOG(NOTICE, + "%s not managed by VFIO driver, skipping", dev_addr); return 1; 
} @@ -790,14 +790,14 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* check if the group is viable */ ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get VFIO group status, " - "error %i (%s)\n", dev_addr, errno, strerror(errno)); + EAL_LOG(ERR, "%s cannot get VFIO group status, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; } else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) { - RTE_LOG(ERR, EAL, "%s VFIO group is not viable! " - "Not all devices in IOMMU group bound to VFIO or unbound\n", + EAL_LOG(ERR, "%s VFIO group is not viable! " + "Not all devices in IOMMU group bound to VFIO or unbound", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -817,9 +817,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER, &vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "%s cannot add VFIO group to container, error " - "%i (%s)\n", dev_addr, errno, strerror(errno)); + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -841,8 +841,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* select an IOMMU type which we will be using */ t = vfio_set_iommu_type(vfio_container_fd); if (!t) { - RTE_LOG(ERR, EAL, - "%s failed to select IOMMU type\n", + EAL_LOG(ERR, + "%s failed to select IOMMU type", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -857,9 +857,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, else ret = 0; if (ret) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "%s DMA remapping failed, error " - "%i (%s)\n", + "%i (%s)", dev_addr, errno, strerror(errno)); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -886,10 +886,10 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, map->addr, map->iova, map->len, 1); if (ret) { - RTE_LOG(ERR, EAL, "Couldn't map user memory for DMA: " + EAL_LOG(ERR, "Couldn't map user memory for DMA: " "va: 0x%" PRIx64 " " "iova: 0x%" PRIx64 " " - "len: 0x%" PRIu64 "\n", + "len: 0x%" PRIu64, map->addr, map->iova, map->len); rte_spinlock_recursive_unlock( @@ -911,13 +911,13 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, rte_mcfg_mem_read_unlock(); if (ret && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n"); + EAL_LOG(ERR, "Could not install memory event callback for VFIO"); return -1; } if (ret) - RTE_LOG(DEBUG, EAL, "Memory event callbacks not supported\n"); + EAL_LOG(DEBUG, "Memory event callbacks not supported"); else - RTE_LOG(DEBUG, EAL, "Installed memory event callback for VFIO\n"); + EAL_LOG(DEBUG, "Installed memory event callback for VFIO"); } } else if (rte_eal_process_type() != RTE_PROC_PRIMARY && vfio_cfg == default_vfio_cfg && @@ -929,7 +929,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, */ ret = vfio_sync_default_container(); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not sync default VFIO container\n"); + EAL_LOG(ERR, "Could not sync default VFIO container"); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); return -1; @@ -937,7 +937,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, /* we have successfully initialized VFIO, notify user */ const struct vfio_iommu_type *t = 
default_vfio_cfg->vfio_iommu_type; - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + EAL_LOG(INFO, "Using IOMMU type %d (%s)", t->type_id, t->name); } @@ -965,7 +965,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, * the VFIO group or the container not having IOMMU configured. */ - RTE_LOG(WARNING, EAL, "Getting a vfio_dev_fd for %s failed\n", + EAL_LOG(WARNING, "Getting a vfio_dev_fd for %s failed", dev_addr); close(vfio_group_fd); rte_vfio_clear_group(vfio_group_fd); @@ -976,8 +976,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr, dev_get_info: ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info); if (ret) { - RTE_LOG(ERR, EAL, "%s cannot get device info, " - "error %i (%s)\n", dev_addr, errno, + EAL_LOG(ERR, "%s cannot get device info, " + "error %i (%s)", dev_addr, errno, strerror(errno)); close(*vfio_dev_fd); close(vfio_group_fd); @@ -1007,7 +1007,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get group number */ ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num); if (ret <= 0) { - RTE_LOG(WARNING, EAL, "%s not managed by VFIO driver\n", + EAL_LOG(WARNING, "%s not managed by VFIO driver", dev_addr); /* This is an error at this point. */ ret = -1; @@ -1017,7 +1017,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* get the actual group fd */ vfio_group_fd = rte_vfio_get_group_fd(iommu_group_num); if (vfio_group_fd < 0) { - RTE_LOG(INFO, EAL, "rte_vfio_get_group_fd failed for %s\n", + EAL_LOG(INFO, "rte_vfio_get_group_fd failed for %s", dev_addr); ret = vfio_group_fd; goto out; @@ -1034,7 +1034,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, /* Closing a device */ if (close(vfio_dev_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_dev_fd for %s\n", + EAL_LOG(INFO, "Error when closing vfio_dev_fd for %s", dev_addr); ret = -1; goto out; @@ -1047,14 +1047,14 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr, if (!vfio_group_device_count(vfio_group_fd)) { if (close(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when closing vfio_group_fd for %s\n", + EAL_LOG(INFO, "Error when closing vfio_group_fd for %s", dev_addr); ret = -1; goto out; } if (rte_vfio_clear_group(vfio_group_fd) < 0) { - RTE_LOG(INFO, EAL, "Error when clearing group for %s\n", + EAL_LOG(INFO, "Error when clearing group for %s", dev_addr); ret = -1; goto out; @@ -1101,21 +1101,21 @@ rte_vfio_enable(const char *modname) } } - RTE_LOG(DEBUG, EAL, "Probing VFIO support...\n"); + EAL_LOG(DEBUG, "Probing VFIO support..."); /* check if vfio module is loaded */ vfio_available = rte_eal_check_module(modname); /* return error directly */ if (vfio_available == -1) { - RTE_LOG(INFO, EAL, "Could not get loaded module details!\n"); + EAL_LOG(INFO, "Could not get loaded module details!"); return -1; } /* return 0 if VFIO modules not loaded */ if (vfio_available == 0) { - RTE_LOG(DEBUG, EAL, - "VFIO modules not loaded, skipping VFIO support...\n"); + EAL_LOG(DEBUG, + "VFIO modules not loaded, skipping VFIO support..."); return 0; } @@ -1131,10 +1131,10 @@ rte_vfio_enable(const char *modname) /* check if we have VFIO driver enabled */ if (default_vfio_cfg->vfio_container_fd != -1) { - RTE_LOG(INFO, EAL, "VFIO support initialized\n"); + EAL_LOG(INFO, "VFIO support initialized"); default_vfio_cfg->vfio_enabled = 1; } else { - RTE_LOG(NOTICE, EAL, "VFIO support could not be initialized\n"); + EAL_LOG(NOTICE, "VFIO support could not be initialized"); } 
return 0; @@ -1186,7 +1186,7 @@ vfio_get_default_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request default VFIO container fd\n"); + EAL_LOG(ERR, "Cannot request default VFIO container fd"); return -1; } @@ -1209,13 +1209,13 @@ vfio_set_iommu_type(int vfio_container_fd) int ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU, t->type_id); if (!ret) { - RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n", + EAL_LOG(INFO, "Using IOMMU type %d (%s)", t->type_id, t->name); return t; } /* not an error, there may be more supported IOMMU types */ - RTE_LOG(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error " - "%i (%s)\n", t->type_id, t->name, errno, + EAL_LOG(DEBUG, "Set IOMMU type %d (%s) failed, error " + "%i (%s)", t->type_id, t->name, errno, strerror(errno)); } /* if we didn't find a suitable IOMMU type, fail */ @@ -1233,15 +1233,15 @@ vfio_has_supported_extensions(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION, t->type_id); if (ret < 0) { - RTE_LOG(ERR, EAL, "Could not get IOMMU type, error " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Could not get IOMMU type, error " + "%i (%s)", errno, strerror(errno)); close(vfio_container_fd); return -1; } else if (ret == 1) { /* we found a supported extension */ n_extensions++; } - RTE_LOG(DEBUG, EAL, "IOMMU type %d (%s) is %s\n", + EAL_LOG(DEBUG, "IOMMU type %d (%s) is %s", t->type_id, t->name, ret ? "supported" : "not supported"); } @@ -1271,9 +1271,9 @@ rte_vfio_get_container_fd(void) if (internal_conf->process_type == RTE_PROC_PRIMARY) { vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR); if (vfio_container_fd < 0) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Cannot open VFIO container %s, error " - "%i (%s)\n", VFIO_CONTAINER_PATH, + "%i (%s)", VFIO_CONTAINER_PATH, errno, strerror(errno)); return -1; } @@ -1282,19 +1282,19 @@ rte_vfio_get_container_fd(void) ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION); if (ret != VFIO_API_VERSION) { if (ret < 0) - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Could not get VFIO API version, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); else - RTE_LOG(ERR, EAL, "Unsupported VFIO API version!\n"); + EAL_LOG(ERR, "Unsupported VFIO API version!"); close(vfio_container_fd); return -1; } ret = vfio_has_supported_extensions(vfio_container_fd); if (ret) { - RTE_LOG(ERR, EAL, - "No supported IOMMU extensions found!\n"); + EAL_LOG(ERR, + "No supported IOMMU extensions found!"); return -1; } @@ -1322,7 +1322,7 @@ rte_vfio_get_container_fd(void) } free(mp_reply.msgs); - RTE_LOG(ERR, EAL, "Cannot request VFIO container fd\n"); + EAL_LOG(ERR, "Cannot request VFIO container fd"); return -1; } @@ -1352,7 +1352,7 @@ rte_vfio_get_group_num(const char *sysfs_base, tok, RTE_DIM(tok), '/'); if (ret <= 0) { - RTE_LOG(ERR, EAL, "%s cannot get IOMMU group\n", dev_addr); + EAL_LOG(ERR, "%s cannot get IOMMU group", dev_addr); return -1; } @@ -1362,7 +1362,7 @@ rte_vfio_get_group_num(const char *sysfs_base, end = group_tok; *iommu_group_num = strtol(group_tok, &end, 10); if ((end != group_tok && *end != '\0') || errno != 0) { - RTE_LOG(ERR, EAL, "%s error parsing IOMMU number!\n", dev_addr); + EAL_LOG(ERR, "%s error parsing IOMMU number!", dev_addr); return -1; } @@ -1411,12 +1411,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, * returned from kernel. 
*/ if (errno == EEXIST) { - RTE_LOG(DEBUG, EAL, + EAL_LOG(DEBUG, "Memory segment is already mapped, skipping"); } else { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Cannot set up DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1429,12 +1429,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot clear DMA remapping, error " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Cannot clear DMA remapping, error " + "%i (%s)", errno, strerror(errno)); return -1; } else if (dma_unmap.size != len) { - RTE_LOG(ERR, EAL, "Unexpected size %"PRIu64 - " of DMA remapping cleared instead of %"PRIu64"\n", + EAL_LOG(ERR, "Unexpected size %"PRIu64 + " of DMA remapping cleared instead of %"PRIu64, (uint64_t)dma_unmap.size, len); rte_errno = EIO; return -1; @@ -1470,16 +1470,16 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, struct vfio_iommu_type1_dma_map dma_map; if (iova + len > spapr_dma_win_len) { - RTE_LOG(ERR, EAL, "DMA map attempt outside DMA window\n"); + EAL_LOG(ERR, "DMA map attempt outside DMA window"); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Cannot register vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1493,8 +1493,8 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map); if (ret) { - RTE_LOG(ERR, EAL, "Cannot map vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Cannot map vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -1509,17 +1509,17 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova, ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap); if (ret) { - RTE_LOG(ERR, EAL, "Cannot unmap vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Cannot unmap vaddr for IOMMU, error " + "%i (%s)", errno, strerror(errno)); return -1; } ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg); if (ret) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Cannot unregister vaddr for IOMMU, error " - "%i (%s)\n", errno, strerror(errno)); + "%i (%s)", errno, strerror(errno)); return -1; } } @@ -1599,7 +1599,7 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) */ FILE *fd = fopen(proc_iomem, "r"); if (fd == NULL) { - RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem); + EAL_LOG(ERR, "Cannot open %s", proc_iomem); return -1; } /* Scan /proc/iomem for the highest PA in the system */ @@ -1612,15 +1612,15 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) /* Validate the format of the memory string */ if (space == NULL || dash == NULL || space < dash) { - RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n", + EAL_LOG(ERR, "Can't parse line \"%s\" in file %s", line, proc_iomem); continue; } start = strtoull(line, NULL, 16); end = strtoull(dash + 1, NULL, 16); - RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%" PRIx64 - " to 0x%" PRIx64 "\n", start, end); + EAL_LOG(DEBUG, "Found system RAM from 0x%" PRIx64 + " to 0x%" PRIx64, start, end); if (end > max) max = end; } @@ -1628,22 +1628,22 @@ find_highest_mem_addr(struct spapr_size_walk_param *param) fclose(fd); if (max == 0) { - RTE_LOG(ERR, EAL,
"Failed to find valid \"System RAM\" " - "entry in file %s\n", proc_iomem); + EAL_LOG(ERR, "Failed to find valid \"System RAM\" " + "entry in file %s", proc_iomem); return -1; } spapr_dma_win_len = rte_align64pow2(max + 1); return 0; } else if (rte_eal_iova_mode() == RTE_IOVA_VA) { - RTE_LOG(DEBUG, EAL, "Highest VA address in memseg list is 0x%" - PRIx64 "\n", param->max_va); + EAL_LOG(DEBUG, "Highest VA address in memseg list is 0x%" + PRIx64, param->max_va); spapr_dma_win_len = rte_align64pow2(param->max_va); return 0; } spapr_dma_win_len = 0; - RTE_LOG(ERR, EAL, "Unsupported IOVA mode\n"); + EAL_LOG(ERR, "Unsupported IOVA mode"); return -1; } @@ -1668,18 +1668,18 @@ spapr_dma_win_size(void) /* walk the memseg list to find the page size/max VA address */ memset(¶m, 0, sizeof(param)); if (rte_memseg_list_walk(vfio_spapr_size_walk, ¶m) < 0) { - RTE_LOG(ERR, EAL, "Failed to walk memseg list for DMA window size\n"); + EAL_LOG(ERR, "Failed to walk memseg list for DMA window size"); return -1; } /* we can't be sure if DMA window covers external memory */ if (param.is_user_managed) - RTE_LOG(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU\n"); + EAL_LOG(WARNING, "Detected user managed external memory which may not be managed by the IOMMU"); /* check physical/virtual memory size */ if (find_highest_mem_addr(¶m) < 0) return -1; - RTE_LOG(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64 "\n", + EAL_LOG(DEBUG, "Setting DMA window size to 0x%" PRIx64, spapr_dma_win_len); spapr_dma_win_page_sz = param.page_sz; rte_mem_set_dma_mask(rte_ctz64(spapr_dma_win_len)); @@ -1703,7 +1703,7 @@ vfio_spapr_create_dma_window(int vfio_container_fd) ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info); if (ret) { - RTE_LOG(ERR, EAL, "Cannot get IOMMU info, error %i (%s)\n", + EAL_LOG(ERR, "Cannot get IOMMU info, error %i (%s)", errno, strerror(errno)); return -1; } @@ -1744,17 +1744,17 @@ vfio_spapr_create_dma_window(int vfio_container_fd) } #endif /* VFIO_IOMMU_SPAPR_INFO_DDW */ if (ret) { - RTE_LOG(ERR, EAL, "Cannot create new DMA window, error " - "%i (%s)\n", errno, strerror(errno)); - RTE_LOG(ERR, EAL, - "Consider using a larger hugepage size if supported by the system\n"); + EAL_LOG(ERR, "Cannot create new DMA window, error " + "%i (%s)", errno, strerror(errno)); + EAL_LOG(ERR, + "Consider using a larger hugepage size if supported by the system"); return -1; } /* verify the start address */ if (create.start_addr != 0) { - RTE_LOG(ERR, EAL, "Received unsupported start address 0x%" - PRIx64 "\n", (uint64_t)create.start_addr); + EAL_LOG(ERR, "Received unsupported start address 0x%" + PRIx64, (uint64_t)create.start_addr); return -1; } return ret; @@ -1769,13 +1769,13 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, if (do_map) { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 1)) { - RTE_LOG(ERR, EAL, "Failed to map DMA\n"); + EAL_LOG(ERR, "Failed to map DMA"); ret = -1; } } else { if (vfio_spapr_dma_do_map(vfio_container_fd, vaddr, iova, len, 0)) { - RTE_LOG(ERR, EAL, "Failed to unmap DMA\n"); + EAL_LOG(ERR, "Failed to unmap DMA"); ret = -1; } } @@ -1787,7 +1787,7 @@ static int vfio_spapr_dma_map(int vfio_container_fd) { if (vfio_spapr_create_dma_window(vfio_container_fd) < 0) { - RTE_LOG(ERR, EAL, "Could not create new DMA window!\n"); + EAL_LOG(ERR, "Could not create new DMA window!"); return -1; } @@ -1822,14 +1822,14 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, const struct 
vfio_iommu_type *t = vfio_cfg->vfio_iommu_type; if (!t) { - RTE_LOG(ERR, EAL, "VFIO support not initialized\n"); + EAL_LOG(ERR, "VFIO support not initialized"); rte_errno = ENODEV; return -1; } if (!t->dma_user_map_func) { - RTE_LOG(ERR, EAL, - "VFIO custom DMA region mapping not supported by IOMMU %s\n", + EAL_LOG(ERR, + "VFIO custom DMA region mapping not supported by IOMMU %s", t->name); rte_errno = ENOTSUP; return -1; @@ -1851,7 +1851,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, user_mem_maps = &vfio_cfg->mem_maps; rte_spinlock_recursive_lock(&user_mem_maps->lock); if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "No more space for user mem maps\n"); + EAL_LOG(ERR, "No more space for user mem maps"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1865,7 +1865,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * this to be unsupported, because we can't just store any old * mapping and pollute list of active mappings willy-nilly. */ - RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n"); + EAL_LOG(ERR, "Couldn't map new region for DMA"); ret = -1; goto out; } @@ -1921,7 +1921,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, orig_maps, RTE_DIM(orig_maps)); /* did we find anything? */ if (n_orig < 0) { - RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n"); + EAL_LOG(ERR, "Couldn't find previously mapped region"); rte_errno = EINVAL; ret = -1; goto out; @@ -1943,7 +1943,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, vaddr + len, iova + len); if (!start_aligned || !end_aligned) { - RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n"); + EAL_LOG(DEBUG, "DMA partial unmap unsupported"); rte_errno = ENOTSUP; ret = -1; goto out; @@ -1961,7 +1961,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, /* can we store the new maps in our list? */ newlen = (user_mem_maps->n_maps - n_orig) + n_new; if (newlen >= VFIO_MAX_USER_MEM_MAPS) { - RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n"); + EAL_LOG(ERR, "Not enough space to store partial mapping"); rte_errno = ENOMEM; ret = -1; goto out; @@ -1978,11 +1978,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova, * within our mapped range but had invalid alignment). 
*/ if (rte_errno != ENODEV && rte_errno != ENOTSUP) { - RTE_LOG(ERR, EAL, "Couldn't unmap region for DMA\n"); + EAL_LOG(ERR, "Couldn't unmap region for DMA"); ret = -1; goto out; } else { - RTE_LOG(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway\n"); + EAL_LOG(DEBUG, "DMA unmapping failed, but removing mappings anyway"); } } @@ -2005,8 +2005,8 @@ rte_vfio_noiommu_is_enabled(void) fd = open(VFIO_NOIOMMU_MODE, O_RDONLY); if (fd < 0) { if (errno != ENOENT) { - RTE_LOG(ERR, EAL, "Cannot open VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Cannot open VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } /* @@ -2019,8 +2019,8 @@ rte_vfio_noiommu_is_enabled(void) cnt = read(fd, &c, 1); close(fd); if (cnt != 1) { - RTE_LOG(ERR, EAL, "Unable to read from VFIO noiommu file " - "%i (%s)\n", errno, strerror(errno)); + EAL_LOG(ERR, "Unable to read from VFIO noiommu file " + "%i (%s)", errno, strerror(errno)); return -1; } @@ -2039,13 +2039,13 @@ rte_vfio_container_create(void) } if (i == VFIO_MAX_CONTAINERS) { - RTE_LOG(ERR, EAL, "Exceed max VFIO container limit\n"); + EAL_LOG(ERR, "Exceed max VFIO container limit"); return -1; } vfio_cfgs[i].vfio_container_fd = rte_vfio_get_container_fd(); if (vfio_cfgs[i].vfio_container_fd < 0) { - RTE_LOG(NOTICE, EAL, "Fail to create a new VFIO container\n"); + EAL_LOG(NOTICE, "Fail to create a new VFIO container"); return -1; } @@ -2060,7 +2060,7 @@ rte_vfio_container_destroy(int container_fd) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + EAL_LOG(ERR, "Invalid VFIO container fd"); return -1; } @@ -2084,7 +2084,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + EAL_LOG(ERR, "Invalid VFIO container fd"); return -1; } @@ -2100,7 +2100,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + EAL_LOG(ERR, "Invalid VFIO container fd"); return -1; } @@ -2113,14 +2113,14 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num) /* This should not happen */ if (i == VFIO_MAX_GROUPS || cur_grp == NULL) { - RTE_LOG(ERR, EAL, "Specified VFIO group number not found\n"); + EAL_LOG(ERR, "Specified VFIO group number not found"); return -1; } if (cur_grp->fd >= 0 && close(cur_grp->fd) < 0) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "Error when closing vfio_group_fd for iommu_group_num " - "%d\n", iommu_group_num); + "%d", iommu_group_num); return -1; } cur_grp->group_num = -1; @@ -2144,7 +2144,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + EAL_LOG(ERR, "Invalid VFIO container fd"); return -1; } @@ -2164,7 +2164,7 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova, vfio_cfg = get_vfio_cfg_by_container_fd(container_fd); if (vfio_cfg == NULL) { - RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n"); + EAL_LOG(ERR, "Invalid VFIO container fd"); return -1; } diff --git a/lib/eal/linux/eal_vfio_mp_sync.c b/lib/eal/linux/eal_vfio_mp_sync.c index 157f20e583..ce14e260fe 100644 --- a/lib/eal/linux/eal_vfio_mp_sync.c +++ 
b/lib/eal/linux/eal_vfio_mp_sync.c @@ -11,6 +11,7 @@ #include <rte_vfio.h> #include <rte_eal.h> +#include "eal_private.h" #include "eal_vfio.h" /** @@ -33,7 +34,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) (const struct vfio_mp_param *)msg->param; if (msg->len_param != sizeof(*m)) { - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + EAL_LOG(ERR, "vfio received invalid message!"); return -1; } @@ -95,7 +96,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer) break; } default: - RTE_LOG(ERR, EAL, "vfio received invalid message!\n"); + EAL_LOG(ERR, "vfio received invalid message!"); return -1; } diff --git a/lib/eal/riscv/rte_cycles.c b/lib/eal/riscv/rte_cycles.c index 358f271311..5691ec3a93 100644 --- a/lib/eal/riscv/rte_cycles.c +++ b/lib/eal/riscv/rte_cycles.c @@ -38,14 +38,14 @@ __rte_riscv_timefrq(void) break; } fail: - RTE_LOG(WARNING, EAL, "Unable to read timebase-frequency from FDT.\n"); + EAL_LOG(WARNING, "Unable to read timebase-frequency from FDT."); return 0; } uint64_t get_tsc_freq_arch(void) { - RTE_LOG(NOTICE, EAL, "TSC using RISC-V %s.\n", + EAL_LOG(NOTICE, "TSC using RISC-V %s.", RTE_RISCV_RDTSC_USE_HPM ? "rdcycle" : "rdtime"); if (!RTE_RISCV_RDTSC_USE_HPM) return __rte_riscv_timefrq(); diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c index afbab9368a..6cd5f1492c 100644 --- a/lib/eal/unix/eal_filesystem.c +++ b/lib/eal/unix/eal_filesystem.c @@ -41,7 +41,7 @@ int eal_create_runtime_dir(void) /* create DPDK subdirectory under runtime dir */ ret = snprintf(tmp, sizeof(tmp), "%s/dpdk", directory); if (ret < 0 || ret == sizeof(tmp)) { - RTE_LOG(ERR, EAL, "Error creating DPDK runtime path name\n"); + EAL_LOG(ERR, "Error creating DPDK runtime path name"); return -1; } @@ -49,7 +49,7 @@ int eal_create_runtime_dir(void) ret = snprintf(run_dir, sizeof(run_dir), "%s/%s", tmp, eal_get_hugefile_prefix()); if (ret < 0 || ret == sizeof(run_dir)) { - RTE_LOG(ERR, EAL, "Error creating prefix-specific runtime path name\n"); + EAL_LOG(ERR, "Error creating prefix-specific runtime path name"); return -1; } @@ -58,14 +58,14 @@ int eal_create_runtime_dir(void) */ ret = mkdir(tmp, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + EAL_LOG(ERR, "Error creating '%s': %s", tmp, strerror(errno)); return -1; } ret = mkdir(run_dir, 0700); if (ret < 0 && errno != EEXIST) { - RTE_LOG(ERR, EAL, "Error creating '%s': %s\n", + EAL_LOG(ERR, "Error creating '%s': %s", run_dir, strerror(errno)); return -1; } @@ -84,20 +84,20 @@ int eal_parse_sysfs_value(const char *filename, unsigned long *val) char *end = NULL; if ((f = fopen(filename, "r")) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot open sysfs value %s\n", + EAL_LOG(ERR, "%s(): cannot open sysfs value %s", __func__, filename); return -1; } if (fgets(buf, sizeof(buf), f) == NULL) { - RTE_LOG(ERR, EAL, "%s(): cannot read sysfs value %s\n", + EAL_LOG(ERR, "%s(): cannot read sysfs value %s", __func__, filename); fclose(f); return -1; } *val = strtoul(buf, &end, 0); if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) { - RTE_LOG(ERR, EAL, "%s(): cannot parse sysfs value %s\n", + EAL_LOG(ERR, "%s(): cannot parse sysfs value %s", __func__, filename); fclose(f); return -1; diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c index 1a7cf8e7b7..1d47e879c8 100644 --- a/lib/eal/unix/eal_firmware.c +++ b/lib/eal/unix/eal_firmware.c @@ -14,6 +14,7 @@ #include <rte_log.h> #include "eal_firmware.h" +#include "eal_private.h" #ifdef 
RTE_HAS_LIBARCHIVE @@ -151,7 +152,7 @@ rte_firmware_read(const char *name, void **buf, size_t *bufsz) path[PATH_MAX - 1] = '\0'; #ifndef RTE_HAS_LIBARCHIVE if (access(path, F_OK) == 0) { - RTE_LOG(WARNING, EAL, "libarchive not linked, %s cannot be decompressed\n", + EAL_LOG(WARNING, "libarchive not linked, %s cannot be decompressed", path); } #else diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c index 68ae93bd6e..97969a401b 100644 --- a/lib/eal/unix/eal_unix_memory.c +++ b/lib/eal/unix/eal_unix_memory.c @@ -29,8 +29,8 @@ mem_map(void *requested_addr, size_t size, int prot, int flags, { void *virt = mmap(requested_addr, size, prot, flags, fd, offset); if (virt == MAP_FAILED) { - RTE_LOG(DEBUG, EAL, - "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s\n", + EAL_LOG(DEBUG, + "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s", requested_addr, size, prot, flags, fd, offset, strerror(errno)); rte_errno = errno; @@ -44,7 +44,7 @@ mem_unmap(void *virt, size_t size) { int ret = munmap(virt, size); if (ret < 0) { - RTE_LOG(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s\n", + EAL_LOG(DEBUG, "Cannot munmap(%p, 0x%zx): %s", virt, size, strerror(errno)); rte_errno = errno; } @@ -83,7 +83,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump) int flags = dump ? EAL_DODUMP : EAL_DONTDUMP; int ret = madvise(virt, size, flags); if (ret) { - RTE_LOG(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s\n", + EAL_LOG(DEBUG, "madvise(%p, %#zx, %d) failed: %s", virt, size, flags, strerror(rte_errno)); rte_errno = errno; } diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c index 36a21ab2f9..1b4c73f58e 100644 --- a/lib/eal/unix/rte_thread.c +++ b/lib/eal/unix/rte_thread.c @@ -13,6 +13,8 @@ #include <rte_log.h> #include <rte_thread.h> +#include "eal_private.h" + struct eal_tls_key { pthread_key_t thread_index; }; @@ -53,7 +55,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = sched_get_priority_max(SCHED_RR); break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + EAL_LOG(DEBUG, "The requested priority value is invalid."); return EINVAL; } @@ -79,7 +81,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + EAL_LOG(DEBUG, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -97,7 +99,7 @@ thread_start_wrapper(void *arg) if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) { ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n"); + EAL_LOG(DEBUG, "rte_thread_set_affinity_by_id failed"); } pthread_mutex_lock(&ctx->wrapper_mutex); @@ -136,7 +138,7 @@ rte_thread_create(rte_thread_t *thread_id, if (thread_attr != NULL) { ret = pthread_attr_init(&attr); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_init failed\n"); + EAL_LOG(DEBUG, "pthread_attr_init failed"); goto cleanup; } @@ -149,7 +151,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setinheritsched(attrp, PTHREAD_EXPLICIT_SCHED); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setinheritsched failed\n"); + EAL_LOG(DEBUG, "pthread_attr_setinheritsched failed"); goto cleanup; } @@ -165,13 +167,13 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_attr_setschedpolicy(attrp, policy); if (ret != 0) { - RTE_LOG(DEBUG, EAL, 
"pthread_attr_setschedpolicy failed\n"); + EAL_LOG(DEBUG, "pthread_attr_setschedpolicy failed"); goto cleanup; } ret = pthread_attr_setschedparam(attrp, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_attr_setschedparam failed\n"); + EAL_LOG(DEBUG, "pthread_attr_setschedparam failed"); goto cleanup; } } @@ -179,7 +181,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp, thread_start_wrapper, &ctx); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_create failed\n"); + EAL_LOG(DEBUG, "pthread_create failed"); goto cleanup; } @@ -211,7 +213,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr) ret = pthread_join((pthread_t)thread_id.opaque_id, pres); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_join failed\n"); + EAL_LOG(DEBUG, "pthread_join failed"); return ret; } @@ -256,7 +258,7 @@ rte_thread_get_priority(rte_thread_t thread_id, ret = pthread_getschedparam((pthread_t)thread_id.opaque_id, &policy, ¶m); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "pthread_getschedparam failed\n"); + EAL_LOG(DEBUG, "pthread_getschedparam failed"); goto cleanup; } @@ -295,13 +297,13 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *)) *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + EAL_LOG(DEBUG, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } err = pthread_key_create(&((*key)->thread_index), destructor); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_create failed: %s\n", + EAL_LOG(DEBUG, "pthread_key_create failed: %s", strerror(err)); free(*key); rte_errno = ENOEXEC; @@ -316,13 +318,13 @@ rte_thread_key_delete(rte_thread_key key) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_key_delete(key->thread_index); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_key_delete failed: %s\n", + EAL_LOG(DEBUG, "pthread_key_delete failed: %s", strerror(err)); free(key); rte_errno = ENOEXEC; @@ -338,13 +340,13 @@ rte_thread_value_set(rte_thread_key key, const void *value) int err; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return -1; } err = pthread_setspecific(key->thread_index, value); if (err) { - RTE_LOG(DEBUG, EAL, "pthread_setspecific failed: %s\n", + EAL_LOG(DEBUG, "pthread_setspecific failed: %s", strerror(err)); rte_errno = ENOEXEC; return -1; @@ -356,7 +358,7 @@ void * rte_thread_value_get(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c index 7ec2152211..52f0e7462d 100644 --- a/lib/eal/windows/eal.c +++ b/lib/eal/windows/eal.c @@ -67,7 +67,7 @@ eal_proc_type_detect(void) ptype = RTE_PROC_SECONDARY; } - RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n", + EAL_LOG(INFO, "Auto-detected process type: %s", ptype == RTE_PROC_PRIMARY ? 
"PRIMARY" : "SECONDARY"); return ptype; @@ -175,16 +175,16 @@ eal_parse_args(int argc, char **argv) exit(EXIT_SUCCESS); default: if (opt < OPT_LONG_MIN_NUM && isprint(opt)) { - RTE_LOG(ERR, EAL, "Option %c is not supported " - "on Windows\n", opt); + EAL_LOG(ERR, "Option %c is not supported " + "on Windows", opt); } else if (opt >= OPT_LONG_MIN_NUM && opt < OPT_LONG_MAX_NUM) { - RTE_LOG(ERR, EAL, "Option %s is not supported " - "on Windows\n", + EAL_LOG(ERR, "Option %s is not supported " + "on Windows", eal_long_options[option_index].name); } else { - RTE_LOG(ERR, EAL, "Option %d is not supported " - "on Windows\n", opt); + EAL_LOG(ERR, "Option %d is not supported " + "on Windows", opt); } eal_usage(prgname); return -1; @@ -217,7 +217,7 @@ static void rte_eal_init_alert(const char *msg) { fprintf(stderr, "EAL: FATAL: %s\n", msg); - RTE_LOG(ERR, EAL, "%s\n", msg); + EAL_LOG(ERR, "%s", msg); } /* Stubs to enable EAL trace point compilation @@ -312,8 +312,8 @@ rte_eal_init(int argc, char **argv) /* Prevent creation of shared memory files. */ if (internal_conf->in_memory == 0) { - RTE_LOG(WARNING, EAL, "Multi-process support is requested, " - "but not available.\n"); + EAL_LOG(WARNING, "Multi-process support is requested, " + "but not available."); internal_conf->in_memory = 1; internal_conf->no_shconf = 1; } @@ -356,21 +356,21 @@ rte_eal_init(int argc, char **argv) has_phys_addr = true; if (eal_mem_virt2iova_init() < 0) { /* Non-fatal error if physical addresses are not required. */ - RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, " - "PA will not be available\n"); + EAL_LOG(DEBUG, "Cannot access virt2phys driver, " + "PA will not be available"); has_phys_addr = false; } iova_mode = internal_conf->iova_mode; if (iova_mode == RTE_IOVA_DC) { - RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n"); + EAL_LOG(DEBUG, "Specific IOVA mode is not requested, autodetecting"); if (has_phys_addr) { - RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n"); + EAL_LOG(DEBUG, "Selecting IOVA mode according to bus requests"); iova_mode = rte_bus_get_iommu_class(); if (iova_mode == RTE_IOVA_DC) { if (!RTE_IOVA_IN_MBUF) { iova_mode = RTE_IOVA_VA; - RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n"); + EAL_LOG(DEBUG, "IOVA as VA mode is forced by build option."); } else { iova_mode = RTE_IOVA_PA; } @@ -392,7 +392,7 @@ rte_eal_init(int argc, char **argv) return -1; } - RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n", + EAL_LOG(DEBUG, "Selected IOVA mode '%s'", iova_mode == RTE_IOVA_PA ? "PA" : "VA"); rte_eal_get_configuration()->iova_mode = iova_mode; @@ -442,7 +442,7 @@ rte_eal_init(int argc, char **argv) &lcore_config[config->main_lcore].cpuset); ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset)); - RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n", + EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])", config->main_lcore, rte_thread_self().opaque_id, cpuset, ret == 0 ? "" : "..."); @@ -474,7 +474,7 @@ rte_eal_init(int argc, char **argv) ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id, &lcore_config[i].cpuset); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Cannot set affinity\n"); + EAL_LOG(DEBUG, "Cannot set affinity"); } /* Initialize services so drivers can register services during probe. 
*/ diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c index 34b52380ce..052af4b21b 100644 --- a/lib/eal/windows/eal_alarm.c +++ b/lib/eal/windows/eal_alarm.c @@ -9,6 +9,7 @@ #include <rte_alarm.h> #include <rte_spinlock.h> +#include "eal_private.h" #include <eal_trace_internal.h> #include "eal_windows.h" @@ -92,7 +93,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) int ret; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + EAL_LOG(ERR, "NULL callback"); ret = -EINVAL; goto exit; } @@ -105,7 +106,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ap = calloc(1, sizeof(*ap)); if (ap == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate alarm entry\n"); + EAL_LOG(ERR, "Cannot allocate alarm entry"); ret = -ENOMEM; goto exit; } @@ -129,7 +130,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) /* Directly schedule callback execution. */ ret = alarm_set(ap, deadline); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm\n"); + EAL_LOG(ERR, "Cannot setup alarm"); goto fail; } } else { @@ -143,7 +144,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg) ret = intr_thread_exec_sync(alarm_task_exec, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot setup alarm in interrupt thread\n"); + EAL_LOG(ERR, "Cannot setup alarm in interrupt thread"); goto fail; } @@ -187,7 +188,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg) removed = 0; if (cb_fn == NULL) { - RTE_LOG(ERR, EAL, "NULL callback\n"); + EAL_LOG(ERR, "NULL callback"); return -EINVAL; } @@ -246,7 +247,7 @@ intr_thread_exec_sync(void (*func)(void *arg), void *arg) rte_spinlock_lock(&task.lock); ret = eal_intr_thread_schedule(intr_thread_entry, &task); if (ret < 0) { - RTE_LOG(ERR, EAL, "Cannot schedule task to interrupt thread\n"); + EAL_LOG(ERR, "Cannot schedule task to interrupt thread"); return -EINVAL; } diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c index 56ed70df7d..4a6303a2a9 100644 --- a/lib/eal/windows/eal_debug.c +++ b/lib/eal/windows/eal_debug.c @@ -7,6 +7,8 @@ #include <rte_debug.h> #include <rte_windows.h> +#include "eal_private.h" + #include <dbghelp.h> #define BACKTRACE_SIZE 256 @@ -48,8 +50,8 @@ rte_dump_stack(void) error_code = GetLastError(); if (error_code == ERROR_INVALID_ADDRESS) { /* Missing symbols, print message */ - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [<missing_symbols>]\n", frame_num--); + EAL_LOG(ERR, + "%d: [<missing_symbols>]", frame_num--); continue; } else { RTE_LOG_WIN32_ERR("SymFromAddr()"); @@ -67,8 +69,8 @@ rte_dump_stack(void) } } - rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL, - "%d: [%s (%s+0x%0llx)[0x%0llX]]\n", frame_num, + EAL_LOG(ERR, + "%d: [%s (%s+0x%0llx)[0x%0llX]]", frame_num, error_code ? 
"<unknown>" : line.FileName, symbol_info->Name, sym_disp, symbol_info->Address); frame_num--; diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c index 35191056fd..e0b8c54dc5 100644 --- a/lib/eal/windows/eal_dev.c +++ b/lib/eal/windows/eal_dev.c @@ -4,30 +4,32 @@ #include <rte_dev.h> +#include "eal_private.h" + int rte_dev_event_monitor_start(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + EAL_LOG(ERR, "Device event is not supported for Windows"); return -1; } int rte_dev_event_monitor_stop(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + EAL_LOG(ERR, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_enable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + EAL_LOG(ERR, "Device event is not supported for Windows"); return -1; } int rte_dev_hotplug_handle_disable(void) { - RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n"); + EAL_LOG(ERR, "Device event is not supported for Windows"); return -1; } diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c index 775c67e4c4..ff72b8ee38 100644 --- a/lib/eal/windows/eal_hugepages.c +++ b/lib/eal/windows/eal_hugepages.c @@ -89,8 +89,8 @@ hugepage_info_init(void) } hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz; - RTE_LOG(DEBUG, EAL, - "Found %u hugepages of %zu bytes on socket %u\n", + EAL_LOG(DEBUG, + "Found %u hugepages of %zu bytes on socket %u", hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id); } @@ -105,13 +105,13 @@ int eal_hugepage_info_init(void) { if (hugepage_claim_privilege() < 0) { - RTE_LOG(ERR, EAL, - "Cannot claim hugepage privilege, check large-page support privilege\n"); + EAL_LOG(ERR, + "Cannot claim hugepage privilege, check large-page support privilege"); return -1; } if (hugepage_info_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot discover available hugepages\n"); + EAL_LOG(ERR, "Cannot discover available hugepages"); return -1; } diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c index 49efdc098c..c97118d231 100644 --- a/lib/eal/windows/eal_interrupts.c +++ b/lib/eal/windows/eal_interrupts.c @@ -39,7 +39,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) bool finished = false; if (eal_intr_thread_handle_init() < 0) { - RTE_LOG(ERR, EAL, "Cannot open interrupt thread handle\n"); + EAL_LOG(ERR, "Cannot open interrupt thread handle"); goto cleanup; } @@ -57,7 +57,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused) DWORD error = GetLastError(); if (error != WAIT_IO_COMPLETION) { RTE_LOG_WIN32_ERR("GetQueuedCompletionStatusEx()"); - RTE_LOG(ERR, EAL, "Failed waiting for interrupts\n"); + EAL_LOG(ERR, "Failed waiting for interrupts"); break; } @@ -94,7 +94,7 @@ rte_eal_intr_init(void) intr_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1); if (intr_iocp == NULL) { RTE_LOG_WIN32_ERR("CreateIoCompletionPort()"); - RTE_LOG(ERR, EAL, "Cannot create interrupt IOCP\n"); + EAL_LOG(ERR, "Cannot create interrupt IOCP"); return -1; } @@ -102,7 +102,7 @@ rte_eal_intr_init(void) eal_intr_thread_main, NULL); if (ret != 0) { rte_errno = -ret; - RTE_LOG(ERR, EAL, "Cannot create interrupt thread\n"); + EAL_LOG(ERR, "Cannot create interrupt thread"); } return ret; @@ -140,7 +140,7 @@ eal_intr_thread_cancel(void) if (!PostQueuedCompletionStatus( intr_iocp, 0, IOCP_KEY_SHUTDOWN, NULL)) { RTE_LOG_WIN32_ERR("PostQueuedCompletionStatus()"); - RTE_LOG(ERR, EAL, "Cannot cancel interrupt thread\n"); + EAL_LOG(ERR, "Cannot cancel 
interrupt thread"); return; } diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c index 286fe241eb..a498044620 100644 --- a/lib/eal/windows/eal_lcore.c +++ b/lib/eal/windows/eal_lcore.c @@ -65,7 +65,8 @@ eal_query_group_affinity(void) &infos_size)) { DWORD error = GetLastError(); if (error != ERROR_INSUFFICIENT_BUFFER) { - RTE_LOG(ERR, EAL, "Cannot get group information size, error %lu\n", error); + EAL_LOG(ERR, "Cannot get group information size, error %lu", + error); rte_errno = EINVAL; ret = -1; goto cleanup; @@ -74,7 +75,7 @@ eal_query_group_affinity(void) infos = malloc(infos_size); if (infos == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate memory for NUMA node information\n"); + EAL_LOG(ERR, "Cannot allocate memory for NUMA node information"); rte_errno = ENOMEM; ret = -1; goto cleanup; @@ -82,7 +83,7 @@ eal_query_group_affinity(void) if (!GetLogicalProcessorInformationEx(RelationGroup, infos, &infos_size)) { - RTE_LOG(ERR, EAL, "Cannot get group information, error %lu\n", + EAL_LOG(ERR, "Cannot get group information, error %lu", GetLastError()); rte_errno = EINVAL; ret = -1; diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c index aa7589b81d..5db5a474cc 100644 --- a/lib/eal/windows/eal_memalloc.c +++ b/lib/eal/windows/eal_memalloc.c @@ -52,7 +52,7 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, } /* Bugcheck, should not happen. */ - RTE_LOG(DEBUG, EAL, "Attempted to reallocate segment %p " + EAL_LOG(DEBUG, "Attempted to reallocate segment %p " "(size %zu) on socket %d", ms->addr, ms->len, ms->socket_id); return -1; @@ -66,8 +66,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* Request a new chunk of memory from OS. */ addr = eal_mem_alloc_socket(alloc_sz, socket_id); if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate %zu bytes " - "on socket %d\n", alloc_sz, socket_id); + EAL_LOG(DEBUG, "Cannot allocate %zu bytes " + "on socket %d", alloc_sz, socket_id); return -1; } } else { @@ -79,15 +79,15 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, * error, because it breaks MSL assumptions. 
*/ if ((addr != NULL) && (addr != requested_addr)) { - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + EAL_LOG(CRIT, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", requested_addr); return -1; } if (addr == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot commit reserved memory %p " - "(size %zu) on socket %d\n", + EAL_LOG(DEBUG, "Cannot commit reserved memory %p " + "(size %zu) on socket %d", requested_addr, alloc_sz, socket_id); return -1; } @@ -101,8 +101,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, iova = rte_mem_virt2iova(addr); if (iova == RTE_BAD_IOVA) { - RTE_LOG(DEBUG, EAL, - "Cannot get IOVA of allocated segment\n"); + EAL_LOG(DEBUG, + "Cannot get IOVA of allocated segment"); goto error; } @@ -115,12 +115,12 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, page = &info.VirtualAttributes; if (!page->Valid || !page->LargePage) { - RTE_LOG(DEBUG, EAL, "Got regular page instead of a hugepage\n"); + EAL_LOG(DEBUG, "Got regular page instead of a hugepage"); goto error; } if (page->Node != numa_node) { - RTE_LOG(DEBUG, EAL, - "NUMA node hint %u (socket %d) not respected, got %u\n", + EAL_LOG(DEBUG, + "NUMA node hint %u (socket %d) not respected, got %u", numa_node, socket_id, page->Node); goto error; } @@ -141,8 +141,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id, /* During decommitment, memory is temporarily returned * to the system and the address may become unavailable. */ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", addr); + EAL_LOG(CRIT, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", addr); } return -1; } @@ -153,8 +153,8 @@ free_seg(struct rte_memseg *ms) if (eal_mem_decommit(ms->addr, ms->len)) { if (rte_errno == EADDRNOTAVAIL) { /* See alloc_seg() for explanation. 
*/ - RTE_LOG(CRIT, EAL, "Address %p occupied by an alien " - " allocation - MSL is not VA-contiguous!\n", + EAL_LOG(CRIT, "Address %p occupied by an alien " + " allocation - MSL is not VA-contiguous!", ms->addr); } return -1; @@ -233,8 +233,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) map_addr = RTE_PTR_ADD(cur_msl->base_va, cur_idx * page_sz); if (alloc_seg(cur, map_addr, wa->socket, wa->hi)) { - RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, " - "but only %i were allocated\n", need, i); + EAL_LOG(DEBUG, "attempted to allocate %i segments, " + "but only %i were allocated", need, i); /* if exact number wasn't requested, stop */ if (!wa->exact) @@ -249,7 +249,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg) rte_fbarray_set_free(arr, j); if (free_seg(tmp)) - RTE_LOG(DEBUG, EAL, "Cannot free page\n"); + EAL_LOG(DEBUG, "Cannot free page"); } /* clear the list */ if (wa->ms) @@ -318,7 +318,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, eal_get_internal_configuration(); if (internal_conf->legacy_mem) { - RTE_LOG(ERR, EAL, "dynamic allocation not supported in legacy mode\n"); + EAL_LOG(ERR, "dynamic allocation not supported in legacy mode"); return -ENOTSUP; } @@ -330,7 +330,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, } } if (!hi) { - RTE_LOG(ERR, EAL, "cannot find relevant hugepage_info entry\n"); + EAL_LOG(ERR, "cannot find relevant hugepage_info entry"); return -1; } @@ -346,7 +346,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, /* memalloc is locked, so it's safe to use thread-unsafe version */ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa); if (ret == 0) { - RTE_LOG(ERR, EAL, "cannot find suitable memseg_list\n"); + EAL_LOG(ERR, "cannot find suitable memseg_list"); ret = -1; } else if (ret > 0) { ret = (int)wa.segs_allocated; @@ -383,7 +383,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) /* if this page is marked as unfreeable, fail */ if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) { - RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n"); + EAL_LOG(DEBUG, "Page is not allowed to be freed"); ret = -1; continue; } @@ -396,7 +396,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) break; } if (i == RTE_DIM(internal_conf->hugepage_info)) { - RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n"); + EAL_LOG(ERR, "Can't find relevant hugepage_info entry"); ret = -1; continue; } @@ -411,7 +411,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs) if (walk_res == 1) continue; if (walk_res == 0) - RTE_LOG(ERR, EAL, "Couldn't find memseg list\n"); + EAL_LOG(ERR, "Couldn't find memseg list"); ret = -1; } return ret; diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c index fd39155163..8de92d4089 100644 --- a/lib/eal/windows/eal_memory.c +++ b/lib/eal/windows/eal_memory.c @@ -114,8 +114,8 @@ eal_mem_win32api_init(void) library_name, function); /* Contrary to the docs, Server 2016 is not supported. 
*/ - RTE_LOG(ERR, EAL, "Windows 10 or Windows Server 2019 " - " is required for memory management\n"); + EAL_LOG(ERR, "Windows 10 or Windows Server 2019 " + " is required for memory management"); ret = -1; } @@ -173,8 +173,8 @@ eal_mem_virt2iova_init(void) detail = malloc(detail_size); if (detail == NULL) { - RTE_LOG(ERR, EAL, "Cannot allocate virt2phys " - "device interface detail data\n"); + EAL_LOG(ERR, "Cannot allocate virt2phys " + "device interface detail data"); goto exit; } @@ -185,7 +185,7 @@ eal_mem_virt2iova_init(void) goto exit; } - RTE_LOG(DEBUG, EAL, "Found virt2phys device: %s\n", detail->DevicePath); + EAL_LOG(DEBUG, "Found virt2phys device: %s", detail->DevicePath); virt2phys_device = CreateFile( detail->DevicePath, 0, 0, NULL, OPEN_EXISTING, 0, NULL); @@ -574,8 +574,8 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags, int ret = mem_free(requested_addr, size, true); if (ret) { if (ret > 0) { - RTE_LOG(ERR, EAL, "Cannot map memory " - "to a region not reserved\n"); + EAL_LOG(ERR, "Cannot map memory " + "to a region not reserved"); rte_errno = EADDRNOTAVAIL; } return NULL; @@ -691,7 +691,7 @@ eal_nohuge_init(void) NULL, mem_sz, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE); if (addr == NULL) { RTE_LOG_WIN32_ERR("VirtualAlloc(size=%#zx)", mem_sz); - RTE_LOG(ERR, EAL, "Cannot allocate memory\n"); + EAL_LOG(ERR, "Cannot allocate memory"); return -1; } @@ -702,9 +702,9 @@ eal_nohuge_init(void) if (mcfg->dma_maskbits && rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) { - RTE_LOG(ERR, EAL, + EAL_LOG(ERR, "%s(): couldn't allocate memory due to IOVA " - "exceeding limits of current DMA mask.\n", __func__); + "exceeding limits of current DMA mask.", __func__); return -1; } diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h index 43b228d388..91cf15eaaa 100644 --- a/lib/eal/windows/eal_windows.h +++ b/lib/eal/windows/eal_windows.h @@ -12,12 +12,14 @@ #include <rte_errno.h> #include <rte_windows.h> +#include "eal_private.h" + /** * Log current function as not implemented and set rte_errno. */ #define EAL_LOG_NOT_IMPLEMENTED() \ do { \ - RTE_LOG(DEBUG, EAL, "%s() is not implemented\n", __func__); \ + EAL_LOG(DEBUG, "%s() is not implemented", __func__); \ rte_errno = ENOTSUP; \ } while (0) @@ -25,7 +27,7 @@ * Log current function as a stub. */ #define EAL_LOG_STUB() \ - RTE_LOG(DEBUG, EAL, "Windows: %s() is a stub\n", __func__) + EAL_LOG(DEBUG, "Windows: %s() is a stub", __func__) /** * Create a map of processors and cores on the system. 
diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c index 145ac4b5aa..6f991dfa5d 100644 --- a/lib/eal/windows/rte_thread.c +++ b/lib/eal/windows/rte_thread.c @@ -12,6 +12,7 @@ #include <rte_stdatomic.h> #include <rte_thread.h> +#include "eal_private.h" #include "eal_windows.h" struct eal_tls_key { @@ -67,7 +68,7 @@ static int thread_log_last_error(const char *message) { DWORD error = GetLastError(); - RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message); + EAL_LOG(DEBUG, "GetLastError()=%lu: %s", error, message); return thread_translate_win32_error(error); } @@ -90,7 +91,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri, *os_pri = THREAD_PRIORITY_TIME_CRITICAL; break; default: - RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n"); + EAL_LOG(DEBUG, "The requested priority value is invalid."); return EINVAL; } @@ -109,7 +110,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; case HIGH_PRIORITY_CLASS: - RTE_LOG(WARNING, EAL, "The OS priority class is high not real-time.\n"); + EAL_LOG(WARNING, "The OS priority class is high not real-time."); /* FALLTHROUGH */ case REALTIME_PRIORITY_CLASS: if (os_pri == THREAD_PRIORITY_TIME_CRITICAL) { @@ -118,7 +119,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class, } break; default: - RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n"); + EAL_LOG(DEBUG, "The OS priority value does not map to an EAL-defined priority."); return EINVAL; } @@ -148,7 +149,7 @@ convert_cpuset_to_affinity(const rte_cpuset_t *cpuset, if (affinity->Group == (USHORT)-1) { affinity->Group = cpu_affinity->Group; } else if (affinity->Group != cpu_affinity->Group) { - RTE_LOG(DEBUG, EAL, "All processors must belong to the same processor group\n"); + EAL_LOG(DEBUG, "All processors must belong to the same processor group"); ret = ENOTSUP; goto cleanup; } @@ -194,7 +195,7 @@ rte_thread_create(rte_thread_t *thread_id, ctx = calloc(1, sizeof(*ctx)); if (ctx == NULL) { - RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n"); + EAL_LOG(DEBUG, "Insufficient memory for thread context allocations"); ret = ENOMEM; goto cleanup; } @@ -217,7 +218,7 @@ rte_thread_create(rte_thread_t *thread_id, &thread_affinity ); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + EAL_LOG(DEBUG, "Unable to convert cpuset to thread affinity"); thread_exit = true; goto resume_thread; } @@ -232,7 +233,7 @@ rte_thread_create(rte_thread_t *thread_id, ret = rte_thread_set_priority(*thread_id, thread_attr->priority); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to set thread priority\n"); + EAL_LOG(DEBUG, "Unable to set thread priority"); thread_exit = true; goto resume_thread; } @@ -360,7 +361,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name) CloseHandle(thread_handle); if (ret != 0) - RTE_LOG(DEBUG, EAL, "Failed to set thread name\n"); + EAL_LOG(DEBUG, "Failed to set thread name"); } int @@ -446,7 +447,7 @@ rte_thread_key_create(rte_thread_key *key, { *key = malloc(sizeof(**key)); if ((*key) == NULL) { - RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n"); + EAL_LOG(DEBUG, "Cannot allocate TLS key."); rte_errno = ENOMEM; return -1; } @@ -464,7 +465,7 @@ int rte_thread_key_delete(rte_thread_key key) { if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -484,7 +485,7 @@ rte_thread_value_set(rte_thread_key 
key, const void *value) char *p; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return -1; } @@ -504,7 +505,7 @@ rte_thread_value_get(rte_thread_key key) void *output; if (!key) { - RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n"); + EAL_LOG(DEBUG, "Invalid TLS key."); rte_errno = EINVAL; return NULL; } @@ -532,7 +533,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id, ret = convert_cpuset_to_affinity(cpuset, &thread_affinity); if (ret != 0) { - RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n"); + EAL_LOG(DEBUG, "Unable to convert cpuset to thread affinity"); goto cleanup; } diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c index 78fb9250ef..cd0d90f328 100644 --- a/lib/efd/rte_efd.c +++ b/lib/efd/rte_efd.c @@ -31,6 +31,8 @@ RTE_LOG_REGISTER_DEFAULT(efd_logtype, INFO); #define RTE_LOGTYPE_EFD efd_logtype +#define EFD_LOG(level, fmt, ...) \ + RTE_LOG(level, EFD, fmt "\n", ## __VA_ARGS__) #define EFD_KEY(key_idx, table) (table->keys + ((key_idx) * table->key_len)) /** Hash function used to determine chunk_id and bin_id for a group */ @@ -512,13 +514,13 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list); if (online_cpu_socket_bitmask == 0) { - RTE_LOG(ERR, EFD, "At least one CPU socket must be enabled " - "in the bitmask\n"); + EFD_LOG(ERR, "At least one CPU socket must be enabled " + "in the bitmask"); return NULL; } if (max_num_rules == 0) { - RTE_LOG(ERR, EFD, "Max num rules must be higher than 0\n"); + EFD_LOG(ERR, "Max num rules must be higher than 0"); return NULL; } @@ -557,7 +559,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, te = rte_zmalloc("EFD_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, EFD, "tailq entry allocation failed\n"); + EFD_LOG(ERR, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -567,15 +569,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD table management structure" - " on socket %u failed\n", + EFD_LOG(ERR, "Allocating EFD table management structure" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, "Allocated EFD table management structure " - "on socket %u\n", offline_cpu_socket); + EFD_LOG(DEBUG, "Allocated EFD table management structure " + "on socket %u", offline_cpu_socket); table->max_num_rules = num_chunks * EFD_TARGET_CHUNK_MAX_NUM_RULES; table->num_rules = 0; @@ -589,16 +591,16 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (key_array == NULL) { - RTE_LOG(ERR, EFD, "Allocating key array" - " on socket %u failed\n", + EFD_LOG(ERR, "Allocating key array" + " on socket %u failed", offline_cpu_socket); goto error_unlock_exit; } table->keys = key_array; strlcpy(table->name, name, sizeof(table->name)); - RTE_LOG(DEBUG, EFD, "Creating an EFD table with %u chunks," - " which potentially supports %u entries\n", + EFD_LOG(DEBUG, "Creating an EFD table with %u chunks," + " which potentially supports %u entries", num_chunks, table->max_num_rules); /* Make sure all the allocatable table pointers are NULL initially */ @@ -626,15 +628,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, socket_id); if (table->chunks[socket_id] == NULL) { - 
RTE_LOG(ERR, EFD, + EFD_LOG(ERR, "Allocating EFD online table on " - "socket %u failed\n", + "socket %u failed", socket_id); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + EFD_LOG(DEBUG, "Allocated EFD online table of size " - "%"PRIu64" bytes (%.2f MB) on socket %u\n", + "%"PRIu64" bytes (%.2f MB) on socket %u", online_table_size, (float) online_table_size / (1024.0F * 1024.0F), @@ -678,14 +680,14 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, RTE_CACHE_LINE_SIZE, offline_cpu_socket); if (table->offline_chunks == NULL) { - RTE_LOG(ERR, EFD, "Allocating EFD offline table on socket %u " - "failed\n", offline_cpu_socket); + EFD_LOG(ERR, "Allocating EFD offline table on socket %u " + "failed", offline_cpu_socket); goto error_unlock_exit; } - RTE_LOG(DEBUG, EFD, + EFD_LOG(DEBUG, "Allocated EFD offline table of size %"PRIu64" bytes " - " (%.2f MB) on socket %u\n", offline_table_size, + " (%.2f MB) on socket %u", offline_table_size, (float) offline_table_size / (1024.0F * 1024.0F), offline_cpu_socket); @@ -698,7 +700,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len, r = rte_ring_create(ring_name, rte_align32pow2(table->max_num_rules), offline_cpu_socket, 0); if (r == NULL) { - RTE_LOG(ERR, EFD, "memory allocation failed\n"); + EFD_LOG(ERR, "memory allocation failed"); rte_efd_free(table); return NULL; } @@ -1018,9 +1020,9 @@ efd_compute_update(struct rte_efd_table * const table, if (found == 0) { /* Key does not exist. Insert the rule into the bin/group */ if (unlikely(current_group->num_rules >= EFD_MAX_GROUP_NUM_RULES)) { - RTE_LOG(ERR, EFD, + EFD_LOG(ERR, "Fatal: No room remaining for insert into " - "chunk %u group %u bin %u\n", + "chunk %u group %u bin %u", *chunk_id, current_group_id, *bin_id); return RTE_EFD_UPDATE_FAILED; @@ -1028,9 +1030,9 @@ efd_compute_update(struct rte_efd_table * const table, if (unlikely(current_group->num_rules == (EFD_MAX_GROUP_NUM_RULES - 1))) { - RTE_LOG(INFO, EFD, "Warn: Insert into last " + EFD_LOG(INFO, "Warn: Insert into last " "available slot in chunk %u " - "group %u bin %u\n", *chunk_id, + "group %u bin %u", *chunk_id, current_group_id, *bin_id); status = RTE_EFD_UPDATE_WARN_GROUP_FULL; } @@ -1117,10 +1119,10 @@ efd_compute_update(struct rte_efd_table * const table, if (current_group != new_group && new_group->num_rules + bin_size > EFD_MAX_GROUP_NUM_RULES) { - RTE_LOG(DEBUG, EFD, + EFD_LOG(DEBUG, "Unable to move_groups to dest group " "containing %u entries." - "bin_size:%u choice:%02x\n", + "bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); goto next_choice; @@ -1135,9 +1137,9 @@ efd_compute_update(struct rte_efd_table * const table, if (!ret) return status; - RTE_LOG(DEBUG, EFD, + EFD_LOG(DEBUG, "Failed to find perfect hash for group " - "containing %u entries. bin_size:%u choice:%02x\n", + "containing %u entries. bin_size:%u choice:%02x", new_group->num_rules, bin_size, choice - 1); /* Restore groups modified to their previous state */ revert_groups(current_group, new_group, bin_size); diff --git a/lib/fib/fib_log.h b/lib/fib/fib_log.h index c731c820f6..aa901cb344 100644 --- a/lib/fib/fib_log.h +++ b/lib/fib/fib_log.h @@ -1,4 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause */ extern int fib_logtype; -#define RTE_LOGTYPE_LPM fib_logtype +#define RTE_LOGTYPE_FIB fib_logtype +#define FIB_LOG(level, fmt, ...) 
\ + RTE_LOG(level, FIB, fmt "\n", ## __VA_ARGS__) diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c index f88e71a59d..4f9fba5a4f 100644 --- a/lib/fib/rte_fib.c +++ b/lib/fib/rte_fib.c @@ -171,8 +171,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) rib = rte_rib_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + FIB_LOG(ERR, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + FIB_LOG(ERR, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + FIB_LOG(ERR, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,9 +217,9 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, + FIB_LOG(ERR, "FIB dataplane struct %s memory allocation failed " - "with err %d\n", name, ret); + "with err %d", name, ret); rte_errno = -ret; goto free_fib; } diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c index ab1d960479..9ad990724a 100644 --- a/lib/fib/rte_fib6.c +++ b/lib/fib/rte_fib6.c @@ -171,8 +171,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) rib = rte_rib6_create(name, socket_id, &rib_conf); if (rib == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate RIB %s\n", name); + FIB_LOG(ERR, + "Can not allocate RIB %s", name); return NULL; } @@ -196,8 +196,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for FIB %s\n", name); + FIB_LOG(ERR, + "Can not allocate tailq entry for FIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -206,7 +206,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib = rte_zmalloc_socket(mem_name, sizeof(struct rte_fib6), RTE_CACHE_LINE_SIZE, socket_id); if (fib == NULL) { - RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name); + FIB_LOG(ERR, "FIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } @@ -217,8 +217,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf) fib->def_nh = conf->default_nh; ret = init_dataplane(fib, socket_id, conf); if (ret < 0) { - RTE_LOG(ERR, LPM, - "FIB dataplane struct %s memory allocation failed\n", + FIB_LOG(ERR, + "FIB dataplane struct %s memory allocation failed", name); rte_errno = -ret; goto free_fib; diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c index 8e4364f060..a1b5137bfb 100644 --- a/lib/hash/rte_cuckoo_hash.c +++ b/lib/hash/rte_cuckoo_hash.c @@ -28,6 +28,8 @@ /* needs to be before rte_cuckoo_hash.h */ RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO); #define RTE_LOGTYPE_HASH hash_logtype +#define HASH_LOG(level, fmt, ...) 
\ + RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) #include "rte_cuckoo_hash.h" @@ -164,7 +166,7 @@ rte_hash_create(const struct rte_hash_parameters *params) hash_list = RTE_TAILQ_CAST(rte_hash_tailq.head, rte_hash_list); if (params == NULL) { - RTE_LOG(ERR, HASH, "rte_hash_create has no parameters\n"); + HASH_LOG(ERR, "%s has no parameters", __func__); return NULL; } @@ -173,13 +175,13 @@ rte_hash_create(const struct rte_hash_parameters *params) (params->entries < RTE_HASH_BUCKET_ENTRIES) || (params->key_len == 0)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n"); + HASH_LOG(ERR, "%s has invalid parameters", __func__); return NULL; } if (params->extra_flag & ~RTE_HASH_EXTRA_FLAGS_MASK) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: unsupported extra flags\n"); + HASH_LOG(ERR, "%s: unsupported extra flags", __func__); return NULL; } @@ -187,8 +189,8 @@ rte_hash_create(const struct rte_hash_parameters *params) if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) && (params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)) { rte_errno = EINVAL; - RTE_LOG(ERR, HASH, "rte_hash_create: choose rw concurrency or " - "rw concurrency lock free\n"); + HASH_LOG(ERR, "%s: choose rw concurrency or rw concurrency lock free", + __func__); return NULL; } @@ -238,7 +240,7 @@ rte_hash_create(const struct rte_hash_parameters *params) r = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(num_key_slots), params->socket_id, 0); if (r == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + HASH_LOG(ERR, "memory allocation failed"); goto err; } @@ -254,8 +256,8 @@ rte_hash_create(const struct rte_hash_parameters *params) params->socket_id, 0); if (r_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + HASH_LOG(ERR, "ext buckets memory allocation " + "failed"); goto err; } } @@ -280,7 +282,7 @@ rte_hash_create(const struct rte_hash_parameters *params) te = rte_zmalloc("HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "tailq entry allocation failed\n"); + HASH_LOG(ERR, "tailq entry allocation failed"); goto err_unlock; } @@ -288,7 +290,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (h == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + HASH_LOG(ERR, "memory allocation failed"); goto err_unlock; } @@ -297,7 +299,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets == NULL) { - RTE_LOG(ERR, HASH, "buckets memory allocation failed\n"); + HASH_LOG(ERR, "buckets memory allocation failed"); goto err_unlock; } @@ -307,8 +309,8 @@ rte_hash_create(const struct rte_hash_parameters *params) num_buckets * sizeof(struct rte_hash_bucket), RTE_CACHE_LINE_SIZE, params->socket_id); if (buckets_ext == NULL) { - RTE_LOG(ERR, HASH, "ext buckets memory allocation " - "failed\n"); + HASH_LOG(ERR, "ext buckets memory allocation " + "failed"); goto err_unlock; } /* Populate ext bkt ring. 
We reserve 0 similar to the @@ -323,8 +325,8 @@ rte_hash_create(const struct rte_hash_parameters *params) ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) * num_key_slots, 0); if (ext_bkt_to_free == NULL) { - RTE_LOG(ERR, HASH, "ext bkt to free memory allocation " - "failed\n"); + HASH_LOG(ERR, "ext bkt to free memory allocation " + "failed"); goto err_unlock; } } @@ -339,7 +341,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (k == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + HASH_LOG(ERR, "memory allocation failed"); goto err_unlock; } @@ -347,7 +349,7 @@ rte_hash_create(const struct rte_hash_parameters *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (tbl_chng_cnt == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + HASH_LOG(ERR, "memory allocation failed"); goto err_unlock; } @@ -395,7 +397,7 @@ rte_hash_create(const struct rte_hash_parameters *params) sizeof(struct lcore_cache) * RTE_MAX_LCORE, RTE_CACHE_LINE_SIZE, params->socket_id); if (local_free_slots == NULL) { - RTE_LOG(ERR, HASH, "local free slots memory allocation failed\n"); + HASH_LOG(ERR, "local free slots memory allocation failed"); goto err_unlock; } } @@ -637,7 +639,7 @@ rte_hash_reset(struct rte_hash *h) /* Reclaim all the resources */ rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL); if (pending != 0) - RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n"); + HASH_LOG(ERR, "RCU reclaim all resources failed"); } memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket)); @@ -1511,8 +1513,8 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n) /* Return key indexes to free slot ring */ ret = free_slot(h, rcu_dq_entry.key_idx); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + HASH_LOG(ERR, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1540,7 +1542,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0); if (hash_rcu_cfg == NULL) { - RTE_LOG(ERR, HASH, "memory allocation failed\n"); + HASH_LOG(ERR, "memory allocation failed"); return 1; } @@ -1564,7 +1566,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg) h->dq = rte_rcu_qsbr_dq_create(¶ms); if (h->dq == NULL) { rte_free(hash_rcu_cfg); - RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n"); + HASH_LOG(ERR, "HASH defer queue creation failed"); return 1; } } else { @@ -1593,8 +1595,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, int ret = free_slot(h, bkt->key_idx[i]); if (ret < 0) { - RTE_LOG(ERR, HASH, - "%s: could not enqueue free slots in global ring\n", + HASH_LOG(ERR, + "%s: could not enqueue free slots in global ring", __func__); } } @@ -1783,7 +1785,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key, } else if (h->dq) /* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */ if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0) - RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n"); + HASH_LOG(ERR, "Failed to push QSBR FIFO"); } __hash_rw_writer_unlock(h); return ret; diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c index faeb50cd89..681286b946 100644 --- a/lib/hash/rte_fbk_hash.c +++ b/lib/hash/rte_fbk_hash.c @@ -21,6 +21,8 @@ RTE_LOG_REGISTER_SUFFIX(fbk_hash_logtype, fbk, INFO); #define RTE_LOGTYPE_HASH fbk_hash_logtype +#define HASH_LOG(level, fmt, ...) 
\ + RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) TAILQ_HEAD(rte_fbk_hash_list, rte_tailq_entry); @@ -118,7 +120,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params) te = rte_zmalloc("FBK_HASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n"); + HASH_LOG(ERR, "Failed to allocate tailq entry"); goto exit; } @@ -126,7 +128,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params) ht = rte_zmalloc_socket(hash_name, mem_size, 0, params->socket_id); if (ht == NULL) { - RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n"); + HASH_LOG(ERR, "Failed to allocate fbk hash table"); rte_free(te); goto exit; } diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c index 1439d8a71f..997ac3c5fa 100644 --- a/lib/hash/rte_hash_crc.c +++ b/lib/hash/rte_hash_crc.c @@ -9,6 +9,8 @@ RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO); #define RTE_LOGTYPE_HASH_CRC hash_crc_logtype +#define HASH_CRC_LOG(level, fmt, ...) \ + RTE_LOG(level, HASH_CRC, fmt "\n", ## __VA_ARGS__) uint8_t rte_hash_crc32_alg = CRC32_SW; @@ -34,8 +36,8 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_X86 if (!(alg & CRC32_SSE42_x64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n"); + HASH_CRC_LOG(WARNING, + "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42"); if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42) rte_hash_crc32_alg = CRC32_SSE42; else @@ -44,15 +46,15 @@ rte_hash_crc_set_alg(uint8_t alg) #if defined RTE_ARCH_ARM64 if (!(alg & CRC32_ARM64)) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_ARM64\n"); + HASH_CRC_LOG(WARNING, + "Unsupported CRC32 algorithm requested using CRC32_ARM64"); if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32)) rte_hash_crc32_alg = CRC32_ARM64; #endif if (rte_hash_crc32_alg == CRC32_SW) - RTE_LOG(WARNING, HASH_CRC, - "Unsupported CRC32 algorithm requested using CRC32_SW\n"); + HASH_CRC_LOG(WARNING, + "Unsupported CRC32 algorithm requested using CRC32_SW"); } /* Setting the best available algorithm */ diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index d819dddd84..3dcae495e9 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -15,6 +15,8 @@ RTE_LOG_REGISTER_SUFFIX(thash_logtype, thash, INFO); #define RTE_LOGTYPE_HASH thash_logtype +#define HASH_LOG(level, fmt, ...) 
\ + RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) #define THASH_NAME_LEN 64 #define TOEPLITZ_HASH_LEN 32 @@ -243,8 +245,8 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, /* allocate tailq entry */ te = rte_zmalloc("THASH_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, HASH, - "Can not allocate tailq entry for thash context %s\n", + HASH_LOG(ERR, + "Can not allocate tailq entry for thash context %s", name); rte_errno = ENOMEM; goto exit; @@ -252,7 +254,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx = rte_zmalloc(NULL, sizeof(struct rte_thash_ctx) + key_len, 0); if (ctx == NULL) { - RTE_LOG(ERR, HASH, "thash ctx %s memory allocation failed\n", + HASH_LOG(ERR, "thash ctx %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; @@ -275,7 +277,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz, ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t), RTE_CACHE_LINE_SIZE); if (ctx->matrices == NULL) { - RTE_LOG(ERR, HASH, "Cannot allocate matrices\n"); + HASH_LOG(ERR, "Cannot allocate matrices"); rte_errno = ENOMEM; goto free_ctx; } @@ -390,8 +392,8 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr, if (((lfsr->bits_cnt + req_bits) > (1ULL << lfsr->deg) - 1) && ((ctx->flags & RTE_THASH_IGNORE_PERIOD_OVERFLOW) != RTE_THASH_IGNORE_PERIOD_OVERFLOW)) { - RTE_LOG(ERR, HASH, - "Can't generate m-sequence due to period overflow\n"); + HASH_LOG(ERR, + "Can't generate m-sequence due to period overflow"); return -ENOSPC; } @@ -470,9 +472,9 @@ insert_before(struct rte_thash_ctx *ctx, return ret; } } else if ((next_ent != NULL) && (end > next_ent->offset)) { - RTE_LOG(ERR, HASH, + HASH_LOG(ERR, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -ENOSPC; } @@ -519,9 +521,9 @@ insert_after(struct rte_thash_ctx *ctx, int ret; if ((next_ent != NULL) && (end > next_ent->offset)) { - RTE_LOG(ERR, HASH, + HASH_LOG(ERR, "Can't add helper %s due to conflict with existing" - " helper %s\n", ent->name, next_ent->name); + " helper %s", ent->name, next_ent->name); rte_free(ent); return -EEXIST; } diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c index c863789b51..cd22a649f7 100644 --- a/lib/hash/rte_thash_gfni.c +++ b/lib/hash/rte_thash_gfni.c @@ -11,6 +11,8 @@ RTE_LOG_REGISTER_SUFFIX(hash_gfni_logtype, gfni, INFO); #define RTE_LOGTYPE_HASH hash_gfni_logtype +#define HASH_LOG(level, fmt, ...) \ + RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) uint32_t rte_thash_gfni(const uint64_t *mtrx __rte_unused, @@ -20,8 +22,8 @@ rte_thash_gfni(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + HASH_LOG(ERR, + "%s is undefined under given arch", __func__); } return 0; @@ -38,8 +40,8 @@ rte_thash_gfni_bulk(const uint64_t *mtrx __rte_unused, if (!warned) { warned = true; - RTE_LOG(ERR, HASH, - "%s is undefined under given arch\n", __func__); + HASH_LOG(ERR, + "%s is undefined under given arch", __func__); } for (i = 0; i < num; i++) diff --git a/lib/ip_frag/ip_frag_common.h b/lib/ip_frag/ip_frag_common.h index 537bce7c3b..4e9637d12f 100644 --- a/lib/ip_frag/ip_frag_common.h +++ b/lib/ip_frag/ip_frag_common.h @@ -22,6 +22,9 @@ extern int ipfrag_logtype; #define RTE_LOGTYPE_IPFRAG ipfrag_logtype /* logging macros. */ +#define IP_FRAG_LOG_LINE(level, fmt, ...) 
\ + RTE_LOG(level, IPFRAG, fmt "\n", ## __VA_ARGS__) + #ifdef RTE_LIBRTE_IP_FRAG_DEBUG #define IP_FRAG_LOG(lvl, fmt, args...) RTE_LOG(lvl, IPFRAG, fmt, ##args) #else diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c index eed399da6b..05f9e479c2 100644 --- a/lib/ip_frag/rte_ip_frag_common.c +++ b/lib/ip_frag/rte_ip_frag_common.c @@ -54,20 +54,20 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries, if (rte_is_power_of_2(bucket_entries) == 0 || nb_entries > UINT32_MAX || nb_entries == 0 || nb_entries < max_entries) { - RTE_LOG(ERR, IPFRAG, "%s: invalid input parameter\n", __func__); + IP_FRAG_LOG_LINE(ERR, "%s: invalid input parameter", __func__); return NULL; } sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]); if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE, socket_id)) == NULL) { - RTE_LOG(ERR, IPFRAG, - "%s: allocation of %zu bytes at socket %d failed do\n", + IP_FRAG_LOG_LINE(ERR, + "%s: allocation of %zu bytes at socket %d failed do", __func__, sz, socket_id); return NULL; } - RTE_LOG(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d\n", + IP_FRAG_LOG_LINE(INFO, "%s: allocated of %zu bytes at socket %d", __func__, sz, socket_id); tbl->max_cycles = max_cycles; diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c index f3c1746cca..6d7c4a3316 100644 --- a/lib/latencystats/rte_latencystats.c +++ b/lib/latencystats/rte_latencystats.c @@ -25,9 +25,10 @@ latencystat_cycles_per_ns(void) return rte_get_timer_hz() / NS_PER_SEC; } -/* Macros for printing using RTE_LOG */ RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO); #define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype +#define LATENCY_STATS_LOG(level, fmt, ...) \ + RTE_LOG(level, LATENCY_STATS, fmt "\n", ## __VA_ARGS__) static uint64_t timestamp_dynflag; static int timestamp_dynfield_offset = -1; @@ -96,7 +97,7 @@ rte_latencystats_update(void) latency_stats_index, values, NUM_LATENCY_STATS); if (ret < 0) - RTE_LOG(INFO, LATENCY_STATS, "Failed to push the stats\n"); + LATENCY_STATS_LOG(INFO, "Failed to push the stats"); return ret; } @@ -228,7 +229,7 @@ rte_latencystats_init(uint64_t app_samp_intvl, mz = rte_memzone_reserve(MZ_RTE_LATENCY_STATS, sizeof(*glob_stats), rte_socket_id(), flags); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d\n", + LATENCY_STATS_LOG(ERR, "Cannot reserve memory: %s:%d", __func__, __LINE__); return -ENOMEM; } @@ -244,8 +245,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, latency_stats_index = rte_metrics_reg_names(ptr_strings, NUM_LATENCY_STATS); if (latency_stats_index < 0) { - RTE_LOG(DEBUG, LATENCY_STATS, - "Failed to register latency stats names\n"); + LATENCY_STATS_LOG(DEBUG, + "Failed to register latency stats names"); return -1; } @@ -253,8 +254,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_mbuf_dyn_rx_timestamp_register(×tamp_dynfield_offset, ×tamp_dynflag); if (ret != 0) { - RTE_LOG(ERR, LATENCY_STATS, - "Cannot register mbuf field/flag for timestamp\n"); + LATENCY_STATS_LOG(ERR, + "Cannot register mbuf field/flag for timestamp"); return -rte_errno; } @@ -264,8 +265,8 @@ rte_latencystats_init(uint64_t app_samp_intvl, ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + LATENCY_STATS_LOG(INFO, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -276,18 +277,18 @@ rte_latencystats_init(uint64_t app_samp_intvl, cbs->cb = 
rte_eth_add_first_rx_callback(pid, qid, add_time_stamps, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + LATENCY_STATS_LOG(INFO, "Failed to " "register Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; cbs->cb = rte_eth_add_tx_callback(pid, qid, calc_latency, user_cb); if (!cbs->cb) - RTE_LOG(INFO, LATENCY_STATS, "Failed to " + LATENCY_STATS_LOG(INFO, "Failed to " "register Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } return 0; @@ -308,8 +309,8 @@ rte_latencystats_uninit(void) ret = rte_eth_dev_info_get(pid, &dev_info); if (ret != 0) { - RTE_LOG(INFO, LATENCY_STATS, - "Error during getting device (port %u) info: %s\n", + LATENCY_STATS_LOG(INFO, + "Error during getting device (port %u) info: %s", pid, strerror(-ret)); continue; @@ -319,17 +320,17 @@ rte_latencystats_uninit(void) cbs = &rx_cbs[pid][qid]; ret = rte_eth_remove_rx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + LATENCY_STATS_LOG(INFO, "failed to " "remove Rx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } for (qid = 0; qid < dev_info.nb_tx_queues; qid++) { cbs = &tx_cbs[pid][qid]; ret = rte_eth_remove_tx_callback(pid, qid, cbs->cb); if (ret) - RTE_LOG(INFO, LATENCY_STATS, "failed to " + LATENCY_STATS_LOG(INFO, "failed to " "remove Tx callback for pid=%d, " - "qid=%d\n", pid, qid); + "qid=%d", pid, qid); } } @@ -366,8 +367,8 @@ rte_latencystats_get(struct rte_metric_value *values, uint16_t size) const struct rte_memzone *mz; mz = rte_memzone_lookup(MZ_RTE_LATENCY_STATS); if (mz == NULL) { - RTE_LOG(ERR, LATENCY_STATS, - "Latency stats memzone not found\n"); + LATENCY_STATS_LOG(ERR, + "Latency stats memzone not found"); return -ENOMEM; } glob_stats = mz->addr; diff --git a/lib/lpm/lpm_log.h b/lib/lpm/lpm_log.h index a0621b70a5..1385b9b02a 100644 --- a/lib/lpm/lpm_log.h +++ b/lib/lpm/lpm_log.h @@ -2,3 +2,5 @@ extern int lpm_logtype; #define RTE_LOGTYPE_LPM lpm_logtype +#define LPM_LOG(level, fmt, ...) 
\ + RTE_LOG(level, LPM, fmt "\n", ## __VA_ARGS__) diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c index 0ca8214786..363058e118 100644 --- a/lib/lpm/rte_lpm.c +++ b/lib/lpm/rte_lpm.c @@ -192,7 +192,7 @@ rte_lpm_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n"); + LPM_LOG(ERR, "Failed to allocate tailq entry"); rte_errno = ENOMEM; goto exit; } @@ -201,7 +201,7 @@ rte_lpm_create(const char *name, int socket_id, i_lpm = rte_zmalloc_socket(mem_name, mem_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + LPM_LOG(ERR, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto exit; @@ -211,7 +211,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n"); + LPM_LOG(ERR, "LPM rules_tbl memory allocation failed"); rte_free(i_lpm); i_lpm = NULL; rte_free(te); @@ -223,7 +223,7 @@ rte_lpm_create(const char *name, int socket_id, (size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id); if (i_lpm->lpm.tbl8 == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n"); + LPM_LOG(ERR, "LPM tbl8 memory allocation failed"); rte_free(i_lpm->rules_tbl); rte_free(i_lpm); i_lpm = NULL; @@ -338,7 +338,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg) params.v = cfg->v; i_lpm->dq = rte_rcu_qsbr_dq_create(¶ms); if (i_lpm->dq == NULL) { - RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n"); + LPM_LOG(ERR, "LPM defer queue creation failed"); return 1; } } else { @@ -565,7 +565,7 @@ tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start) status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq, (void *)&tbl8_group_start); if (status == 1) { - RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n"); + LPM_LOG(ERR, "Failed to push QSBR FIFO"); return -rte_errno; } } diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c index 24ce7dd022..271bc480dc 100644 --- a/lib/lpm/rte_lpm6.c +++ b/lib/lpm/rte_lpm6.c @@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id, rules_tbl = rte_hash_create(&rule_hash_tbl_params); if (rules_tbl == NULL) { - RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n", + LPM_LOG(ERR, "LPM rules hash table allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); goto fail_wo_unlock; } @@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(uint32_t) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_pool == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n", + LPM_LOG(ERR, "LPM tbl8 pool allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id, sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s, RTE_CACHE_LINE_SIZE); if (tbl8_hdrs == NULL) { - RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n", + LPM_LOG(ERR, "LPM tbl8 headers allocation failed: %s (%d)", rte_strerror(rte_errno), rte_errno); rte_errno = ENOMEM; goto fail_wo_unlock; @@ -330,7 +330,7 @@ rte_lpm6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("LPM6_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, LPM, "Failed to allocate tailq entry!\n"); + LPM_LOG(ERR, "Failed to allocate tailq entry!"); 
rte_errno = ENOMEM; goto fail; } @@ -340,7 +340,7 @@ rte_lpm6_create(const char *name, int socket_id, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, LPM, "LPM memory allocation failed\n"); + LPM_LOG(ERR, "LPM memory allocation failed"); rte_free(te); rte_errno = ENOMEM; goto fail; diff --git a/lib/mbuf/mbuf_log.h b/lib/mbuf/mbuf_log.h index d759a9a255..a8a674f6be 100644 --- a/lib/mbuf/mbuf_log.h +++ b/lib/mbuf/mbuf_log.h @@ -2,3 +2,5 @@ extern int mbuf_logtype; #define RTE_LOGTYPE_MBUF mbuf_logtype +#define MBUF_LOG(level, fmt, ...) \ + RTE_LOG(level, MBUF, fmt "\n", ## __VA_ARGS__) diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c index 3eccc61827..559d5ad8a7 100644 --- a/lib/mbuf/rte_mbuf.c +++ b/lib/mbuf/rte_mbuf.c @@ -231,7 +231,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + MBUF_LOG(ERR, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -251,7 +251,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + MBUF_LOG(ERR, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; @@ -297,7 +297,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, int ret; if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) { - RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n", + MBUF_LOG(ERR, "mbuf priv_size=%u is not aligned", priv_size); rte_errno = EINVAL; return NULL; @@ -307,12 +307,12 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, const struct rte_pktmbuf_extmem *extm = ext_mem + i; if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) { - RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n"); + MBUF_LOG(ERR, "invalid extmem descriptor"); rte_errno = EINVAL; return NULL; } if (data_room_size > extm->elt_size) { - RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n", + MBUF_LOG(ERR, "ext elt_size=%u is too small", priv_size); rte_errno = EINVAL; return NULL; @@ -321,7 +321,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, } /* Check whether enough external memory provided. 
*/ if (n_elts < n) { - RTE_LOG(ERR, MBUF, "not enough extmem\n"); + MBUF_LOG(ERR, "not enough extmem"); rte_errno = ENOMEM; return NULL; } @@ -342,7 +342,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n, mp_ops_name = rte_mbuf_best_mempool_ops(); ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL); if (ret != 0) { - RTE_LOG(ERR, MBUF, "error setting mempool handler\n"); + MBUF_LOG(ERR, "error setting mempool handler"); rte_mempool_free(mp); rte_errno = -ret; return NULL; diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c index 4fb1863a10..446018ffdc 100644 --- a/lib/mbuf/rte_mbuf_dyn.c +++ b/lib/mbuf/rte_mbuf_dyn.c @@ -118,7 +118,7 @@ init_shared_mem(void) mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME); } if (mz == NULL) { - RTE_LOG(ERR, MBUF, "Failed to get mbuf dyn shared memory\n"); + MBUF_LOG(ERR, "Failed to get mbuf dyn shared memory"); return -1; } @@ -317,7 +317,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, shm->free_space[i] = 0; process_score(); - RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n", + MBUF_LOG(DEBUG, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd", params->name, params->size, params->align, params->flags, offset); @@ -491,7 +491,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params, shm->free_flags &= ~(1ULL << bitnum); - RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n", + MBUF_LOG(DEBUG, "Registered dynamic flag %s (fl=0x%x) -> %u", params->name, params->flags, bitnum); return bitnum; @@ -592,8 +592,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, offset = rte_mbuf_dynfield_register(&field_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf field for timestamp\n"); + MBUF_LOG(ERR, + "Failed to register mbuf field for timestamp"); return -1; } if (field_offset != NULL) @@ -602,8 +602,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag, strlcpy(flag_desc.name, flag_name, sizeof(flag_desc.name)); offset = rte_mbuf_dynflag_register(&flag_desc); if (offset < 0) { - RTE_LOG(ERR, MBUF, - "Failed to register mbuf flag for %s timestamp\n", + MBUF_LOG(ERR, + "Failed to register mbuf flag for %s timestamp", direction); return -1; } diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c index 5318430126..8e93c6acbd 100644 --- a/lib/mbuf/rte_mbuf_pool_ops.c +++ b/lib/mbuf/rte_mbuf_pool_ops.c @@ -33,8 +33,8 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name) return 0; } - RTE_LOG(ERR, MBUF, - "%s is already registered as platform mbuf pool ops\n", + MBUF_LOG(ERR, + "%s is already registered as platform mbuf pool ops", (char *)mz->addr); return -EEXIST; } diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c index 2f8adad5ca..b7a19bea71 100644 --- a/lib/mempool/rte_mempool.c +++ b/lib/mempool/rte_mempool.c @@ -775,7 +775,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id) cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache), RTE_CACHE_LINE_SIZE, socket_id); if (cache == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n"); + RTE_MEMPOOL_LOG(ERR, "Cannot allocate mempool cache."); rte_errno = ENOMEM; return NULL; } @@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size, /* try to allocate tailq entry */ te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n"); + 
RTE_MEMPOOL_LOG(ERR, "Cannot allocate tailq entry!"); goto exit_unlock; } @@ -1088,16 +1088,16 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, if (free == 0) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_MEMPOOL_LOG(CRIT, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (put)\n"); } hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2; } else if (free == 1) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_MEMPOOL_LOG(CRIT, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (get)\n"); } @@ -1105,8 +1105,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, } else if (free == 2) { if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 && cookie != RTE_MEMPOOL_HEADER_COOKIE2) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_MEMPOOL_LOG(CRIT, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad header cookie (audit)\n"); } @@ -1114,8 +1114,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp, tlr = rte_mempool_get_trailer(obj); cookie = tlr->cookie; if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) { - RTE_LOG(CRIT, MEMPOOL, - "obj=%p, mempool=%p, cookie=%" PRIx64 "\n", + RTE_MEMPOOL_LOG(CRIT, + "obj=%p, mempool=%p, cookie=%" PRIx64, obj, (const void *) mp, cookie); rte_panic("MEMPOOL: bad trailer cookie\n"); } @@ -1200,7 +1200,7 @@ mempool_audit_cache(const struct rte_mempool *mp) const struct rte_mempool_cache *cache; cache = &mp->local_cache[lcore_id]; if (cache->len > RTE_DIM(cache->objs)) { - RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n", + RTE_MEMPOOL_LOG(CRIT, "badness on cache[%u]", lcore_id); rte_panic("MEMPOOL: invalid cache len\n"); } @@ -1429,7 +1429,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func, cb = calloc(1, sizeof(*cb)); if (cb == NULL) { - RTE_LOG(ERR, MEMPOOL, "Cannot allocate event callback!\n"); + RTE_MEMPOOL_LOG(ERR, "Cannot allocate event callback!"); ret = -ENOMEM; goto exit; } diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index 4f8511b8f5..e44039dffb 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -182,6 +182,8 @@ struct rte_mempool_objtlr { */ extern int rte_mempool_logtype; #define RTE_LOGTYPE_MEMPOOL rte_mempool_logtype +#define RTE_MEMPOOL_LOG(level, fmt, ...) 
\ + RTE_LOG(level, MEMPOOL, fmt "\n", ## __VA_ARGS__) /** * A list of memory where objects are stored @@ -847,7 +849,7 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table, ret = ops->enqueue(mp, obj_table, n); #ifdef RTE_LIBRTE_MEMPOOL_DEBUG if (unlikely(ret < 0)) - RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n", + RTE_MEMPOOL_LOG(CRIT, "cannot enqueue %u objects to mempool %s", n, mp->name); #endif return ret; diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c index e871de9ec9..1b33380259 100644 --- a/lib/mempool/rte_mempool_ops.c +++ b/lib/mempool/rte_mempool_ops.c @@ -31,22 +31,22 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h) if (rte_mempool_ops_table.num_ops >= RTE_MEMPOOL_MAX_OPS_IDX) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(ERR, MEMPOOL, - "Maximum number of mempool ops structs exceeded\n"); + RTE_MEMPOOL_LOG(ERR, + "Maximum number of mempool ops structs exceeded"); return -ENOSPC; } if (h->alloc == NULL || h->enqueue == NULL || h->dequeue == NULL || h->get_count == NULL) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(ERR, MEMPOOL, - "Missing callback while registering mempool ops\n"); + RTE_MEMPOOL_LOG(ERR, + "Missing callback while registering mempool ops"); return -EINVAL; } if (strlen(h->name) >= sizeof(ops->name) - 1) { rte_spinlock_unlock(&rte_mempool_ops_table.sl); - RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n", + RTE_MEMPOOL_LOG(DEBUG, "%s(): mempool_ops <%s>: name too long", __func__, h->name); rte_errno = EEXIST; return -EEXIST; diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c index 436cf54953..d52b63506e 100644 --- a/lib/pipeline/rte_pipeline.c +++ b/lib/pipeline/rte_pipeline.c @@ -12,6 +12,9 @@ #include "rte_pipeline.h" +#define PIPELINE_LOG(level, fmt, ...) 
\ + RTE_LOG(level, PIPELINE, fmt "\n", ## __VA_ARGS__) + #define RTE_TABLE_INVALID UINT32_MAX #ifdef RTE_PIPELINE_STATS_COLLECT @@ -160,22 +163,22 @@ static int rte_pipeline_check_params(struct rte_pipeline_params *params) { if (params == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter params\n", __func__); + PIPELINE_LOG(ERR, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* name */ if (params->name == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter name\n", __func__); + PIPELINE_LOG(ERR, + "%s: Incorrect value for parameter name", __func__); return -EINVAL; } /* socket */ if (params->socket_id < 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for parameter socket_id\n", + PIPELINE_LOG(ERR, + "%s: Incorrect value for parameter socket_id", __func__); return -EINVAL; } @@ -192,8 +195,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) /* Check input parameters */ status = rte_pipeline_check_params(params); if (status != 0) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline params check failed (%d)\n", + PIPELINE_LOG(ERR, + "%s: Pipeline params check failed (%d)", __func__, status); return NULL; } @@ -203,8 +206,8 @@ rte_pipeline_create(struct rte_pipeline_params *params) RTE_CACHE_LINE_SIZE, params->socket_id); if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Pipeline memory allocation failed\n", __func__); + PIPELINE_LOG(ERR, + "%s: Pipeline memory allocation failed", __func__); return NULL; } @@ -232,8 +235,8 @@ rte_pipeline_free(struct rte_pipeline *p) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: rte_pipeline parameter is NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: rte_pipeline parameter is NULL", __func__); return -EINVAL; } @@ -273,44 +276,44 @@ rte_table_check_params(struct rte_pipeline *p, uint32_t *table_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: params parameter is NULL", __func__); return -EINVAL; } if (table_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: table_id parameter is NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n", + PIPELINE_LOG(ERR, "%s: params->ops is NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer is NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_create function pointer is NULL", __func__); return -EINVAL; } if (params->ops->f_lookup == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_lookup function pointer is NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_lookup function pointer is NULL", __func__); return -EINVAL; } /* De we have room for one more table? 
*/ if (p->num_tables == RTE_PIPELINE_TABLE_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: Incorrect value for num_tables parameter\n", + PIPELINE_LOG(ERR, + "%s: Incorrect value for num_tables parameter", __func__); return -EINVAL; } @@ -343,8 +346,8 @@ rte_pipeline_table_create(struct rte_pipeline *p, default_entry = rte_zmalloc_socket( "PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id); if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: Failed to allocate default entry\n", __func__); + PIPELINE_LOG(ERR, + "%s: Failed to allocate default entry", __func__); return -EINVAL; } @@ -353,7 +356,7 @@ rte_pipeline_table_create(struct rte_pipeline *p, entry_size); if (h_table == NULL) { rte_free(default_entry); - RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__); + PIPELINE_LOG(ERR, "%s: Table creation failed", __func__); return -EINVAL; } @@ -399,20 +402,20 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (default_entry == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: default_entry parameter is NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: default_entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -421,8 +424,8 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p, if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (default_entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + PIPELINE_LOG(ERR, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -448,14 +451,14 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: pipeline parameter is NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } @@ -484,32 +487,32 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__); + PIPELINE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer NULL\n", + PIPELINE_LOG(ERR, "%s: f_add function pointer NULL", __func__); return -EINVAL; } @@ -517,8 +520,8 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p, if ((entry->action == 
RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entry->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + PIPELINE_LOG(ERR, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } @@ -544,28 +547,28 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_delete == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -585,32 +588,32 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: keys parameter is NULL\n", __func__); + PIPELINE_LOG(ERR, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: entries parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_add_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL\n", + PIPELINE_LOG(ERR, "%s: f_add_bulk function pointer NULL", __func__); return -EINVAL; } @@ -619,8 +622,8 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p, if ((entries[i]->action == RTE_PIPELINE_ACTION_TABLE) && table->table_next_id_valid && (entries[i]->table_id != table->table_next_id)) { - RTE_LOG(ERR, PIPELINE, - "%s: Tree-like topologies not allowed\n", __func__); + PIPELINE_LOG(ERR, + "%s: Tree-like topologies not allowed", __func__); return -EINVAL; } } @@ -649,28 +652,28 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", + PIPELINE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table_id %d out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table_id %d out of range", __func__, table_id); return -EINVAL; } table = &p->tables[table_id]; if (table->ops.f_delete_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_delete function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_delete function pointer NULL", __func__); return -EINVAL; } @@ -687,35 +690,35 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - 
RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + PIPELINE_LOG(ERR, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + PIPELINE_LOG(ERR, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + PIPELINE_LOG(ERR, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_rx == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n", + PIPELINE_LOG(ERR, "%s: f_rx function pointer NULL", __func__); return -EINVAL; } @@ -723,15 +726,15 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p, /* burst_size */ if ((params->burst_size == 0) || (params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n", + PIPELINE_LOG(ERR, "%s: invalid value for burst_size", __func__); return -EINVAL; } /* Do we have room for one more port? */ if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_in\n", __func__); + PIPELINE_LOG(ERR, + "%s: invalid value for num_ports_in", __func__); return -EINVAL; } @@ -744,51 +747,51 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p, uint32_t *port_id) { if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__); + PIPELINE_LOG(ERR, "%s: params parameter NULL", __func__); return -EINVAL; } if (port_id == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n", + PIPELINE_LOG(ERR, "%s: port_id parameter NULL", __func__); return -EINVAL; } /* ops */ if (params->ops == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n", + PIPELINE_LOG(ERR, "%s: params->ops parameter NULL", __func__); return -EINVAL; } if (params->ops->f_create == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_create function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_create function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_tx function pointer NULL", __func__); return -EINVAL; } if (params->ops->f_tx_bulk == NULL) { - RTE_LOG(ERR, PIPELINE, - "%s: f_tx_bulk function pointer NULL\n", __func__); + PIPELINE_LOG(ERR, + "%s: f_tx_bulk function pointer NULL", __func__); return -EINVAL; } /* Do we have room for one more port? 
*/ if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) { - RTE_LOG(ERR, PIPELINE, - "%s: invalid value for num_ports_out\n", __func__); + PIPELINE_LOG(ERR, + "%s: invalid value for num_ports_out", __func__); return -EINVAL; } @@ -816,7 +819,7 @@ rte_pipeline_port_in_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + PIPELINE_LOG(ERR, "%s: Port creation failed", __func__); return -EINVAL; } @@ -866,7 +869,7 @@ rte_pipeline_port_out_create(struct rte_pipeline *p, /* Create the port */ h_port = params->ops->f_create(params->arg_create, p->socket_id); if (h_port == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__); + PIPELINE_LOG(ERR, "%s: Port creation failed", __func__); return -EINVAL; } @@ -901,21 +904,21 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p, /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + PIPELINE_LOG(ERR, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: Table ID %u is out of range\n", + PIPELINE_LOG(ERR, + "%s: Table ID %u is out of range", __func__, table_id); return -EINVAL; } @@ -935,14 +938,14 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + PIPELINE_LOG(ERR, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -982,13 +985,13 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n", + PIPELINE_LOG(ERR, "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1035,7 +1038,7 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1043,17 +1046,17 @@ rte_pipeline_check(struct rte_pipeline *p) /* Check that pipeline has at least one input port, one table and one output port */ if (p->num_ports_in == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n", + PIPELINE_LOG(ERR, "%s: must have at least 1 input port", __func__); return -EINVAL; } if (p->num_tables == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n", + PIPELINE_LOG(ERR, "%s: must have at least 1 table", __func__); return -EINVAL; } if (p->num_ports_out == 0) { - RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n", + PIPELINE_LOG(ERR, "%s: must have at least 1 output port", __func__); return -EINVAL; } @@ -1063,8 +1066,8 @@ rte_pipeline_check(struct rte_pipeline *p) struct rte_port_in *port_in = &p->ports_in[port_in_id]; if (port_in->table_id 
== RTE_TABLE_INVALID) { - RTE_LOG(ERR, PIPELINE, - "%s: Port IN ID %u is not connected\n", + PIPELINE_LOG(ERR, + "%s: Port IN ID %u is not connected", __func__, port_in_id); return -EINVAL; } @@ -1447,7 +1450,7 @@ rte_pipeline_flush(struct rte_pipeline *p) /* Check input arguments */ if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } @@ -1500,14 +1503,14 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_in) { - RTE_LOG(ERR, PIPELINE, - "%s: port IN ID %u is out of range\n", + PIPELINE_LOG(ERR, + "%s: port IN ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1537,13 +1540,13 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", __func__); + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (port_id >= p->num_ports_out) { - RTE_LOG(ERR, PIPELINE, - "%s: port OUT ID %u is out of range\n", __func__, port_id); + PIPELINE_LOG(ERR, + "%s: port OUT ID %u is out of range", __func__, port_id); return -EINVAL; } @@ -1571,14 +1574,14 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id, int retval; if (p == NULL) { - RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", + PIPELINE_LOG(ERR, "%s: pipeline parameter NULL", __func__); return -EINVAL; } if (table_id >= p->num_tables) { - RTE_LOG(ERR, PIPELINE, - "%s: table %u is out of range\n", __func__, table_id); + PIPELINE_LOG(ERR, + "%s: table %u is out of range", __func__, table_id); return -EINVAL; } diff --git a/lib/port/port_log.h b/lib/port/port_log.h new file mode 100644 index 0000000000..bc1744bd97 --- /dev/null +++ b/lib/port/port_log.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +#define PORT_LOG(level, fmt, ...) 
\ + RTE_LOG(level, PORT, fmt "\n", ## __VA_ARGS__) + diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c index e6bb7ee480..d57f680e98 100644 --- a/lib/port/rte_port_ethdev.c +++ b/lib/port/rte_port_ethdev.c @@ -10,6 +10,8 @@ #include "rte_port_ethdev.h" +#include "port_log.h" + /* * Port ETHDEV Reader */ @@ -43,7 +45,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + PORT_LOG(ERR, "%s: params is NULL", __func__); return NULL; } @@ -51,7 +53,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -78,7 +80,7 @@ static int rte_port_ethdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -142,7 +144,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -150,7 +152,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -257,7 +259,7 @@ static int rte_port_ethdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } @@ -323,7 +325,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -331,7 +333,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -470,7 +472,7 @@ static int rte_port_ethdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c index 13350fd608..6acf22998d 100644 --- a/lib/port/rte_port_eventdev.c +++ b/lib/port/rte_port_eventdev.c @@ -10,6 +10,8 @@ #include "rte_port_eventdev.h" +#include "port_log.h" + /* * Port EVENTDEV Reader */ @@ -45,7 +47,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + PORT_LOG(ERR, "%s: params is NULL", __func__); return NULL; } @@ -53,7 +55,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); 
if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -85,7 +87,7 @@ static int rte_port_eventdev_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -155,7 +157,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -163,7 +165,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -290,7 +292,7 @@ static int rte_port_eventdev_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } @@ -362,7 +364,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) (conf->enq_burst_sz == 0) || (conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->enq_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -370,7 +372,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -530,7 +532,7 @@ static int rte_port_eventdev_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c index 7e140793b2..281cba20c2 100644 --- a/lib/port/rte_port_fd.c +++ b/lib/port/rte_port_fd.c @@ -10,6 +10,8 @@ #include "rte_port_fd.h" +#include "port_log.h" + /* * Port FD Reader */ @@ -43,19 +45,19 @@ rte_port_fd_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + PORT_LOG(ERR, "%s: params is NULL", __func__); return NULL; } if (conf->fd < 0) { - RTE_LOG(ERR, PORT, "%s: Invalid file descriptor\n", __func__); + PORT_LOG(ERR, "%s: Invalid file descriptor", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Invalid MTU\n", __func__); + PORT_LOG(ERR, "%s: Invalid MTU", __func__); return NULL; } if (conf->mempool == NULL) { - RTE_LOG(ERR, PORT, "%s: Invalid mempool\n", __func__); + PORT_LOG(ERR, "%s: Invalid mempool", __func__); return NULL; } @@ -63,7 +65,7 @@ rte_port_fd_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -109,7 +111,7 @@ static int rte_port_fd_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + 
PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -171,7 +173,7 @@ rte_port_fd_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -179,7 +181,7 @@ rte_port_fd_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -279,7 +281,7 @@ static int rte_port_fd_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } @@ -344,7 +346,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -352,7 +354,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -464,7 +466,7 @@ static int rte_port_fd_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c index e1f1892176..883601a9ae 100644 --- a/lib/port/rte_port_frag.c +++ b/lib/port/rte_port_frag.c @@ -7,6 +7,8 @@ #include "rte_port_frag.h" +#include "port_log.h" + /* Max number of fragments per packet allowed */ #define RTE_PORT_FRAG_MAX_FRAGS_PER_PACKET 0x80 @@ -62,24 +64,24 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + PORT_LOG(ERR, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + PORT_LOG(ERR, "%s: Parameter ring is NULL", __func__); return NULL; } if (conf->mtu == 0) { - RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__); + PORT_LOG(ERR, "%s: Parameter mtu is invalid", __func__); return NULL; } if (conf->pool_direct == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n", + PORT_LOG(ERR, "%s: Parameter pool_direct is NULL", __func__); return NULL; } if (conf->pool_indirect == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n", + PORT_LOG(ERR, "%s: Parameter pool_indirect is NULL", __func__); return NULL; } @@ -88,7 +90,7 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return NULL; } @@ -232,7 +234,7 @@ static int rte_port_ring_reader_frag_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", 
__func__); + PORT_LOG(ERR, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c index 15109661d1..42cf1ddf45 100644 --- a/lib/port/rte_port_ras.c +++ b/lib/port/rte_port_ras.c @@ -9,6 +9,8 @@ #include "rte_port_ras.h" +#include "port_log.h" + #ifndef RTE_PORT_RAS_N_BUCKETS #define RTE_PORT_RAS_N_BUCKETS 4094 #endif @@ -69,16 +71,16 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__); + PORT_LOG(ERR, "%s: Parameter conf is NULL", __func__); return NULL; } if (conf->ring == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__); + PORT_LOG(ERR, "%s: Parameter ring is NULL", __func__); return NULL; } if ((conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n", + PORT_LOG(ERR, "%s: Parameter tx_burst_sz is invalid", __func__); return NULL; } @@ -87,7 +89,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate socket", __func__); return NULL; } @@ -103,7 +105,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4) socket_id); if (port->frag_tbl == NULL) { - RTE_LOG(ERR, PORT, "%s: rte_ip_frag_table_create failed\n", + PORT_LOG(ERR, "%s: rte_ip_frag_table_create failed", __func__); rte_free(port); return NULL; @@ -282,7 +284,7 @@ rte_port_ring_writer_ras_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Parameter port is NULL", __func__); return -1; } diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c index 002efb7c3e..680d208f45 100644 --- a/lib/port/rte_port_ring.c +++ b/lib/port/rte_port_ring.c @@ -10,6 +10,8 @@ #include "rte_port_ring.h" +#include "port_log.h" + /* * Port RING Reader */ @@ -46,7 +48,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, (conf->ring == NULL) || (rte_ring_is_cons_single(conf->ring) && is_multi) || (!rte_ring_is_cons_single(conf->ring) && !is_multi)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid Parameters", __func__); return NULL; } @@ -54,7 +56,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -107,7 +109,7 @@ static int rte_port_ring_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -174,7 +176,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid Parameters", __func__); return NULL; } @@ -182,7 +184,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), 
RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -370,7 +372,7 @@ rte_port_ring_writer_free(void *port) struct rte_port_ring_writer *p = port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } @@ -443,7 +445,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, (rte_ring_is_prod_single(conf->ring) && is_multi) || (!rte_ring_is_prod_single(conf->ring) && !is_multi) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) { - RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid Parameters", __func__); return NULL; } @@ -451,7 +453,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id, port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -703,7 +705,7 @@ rte_port_ring_writer_nodrop_free(void *port) port; if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c index f6255c4346..da7a6240d7 100644 --- a/lib/port/rte_port_sched.c +++ b/lib/port/rte_port_sched.c @@ -7,6 +7,8 @@ #include "rte_port_sched.h" +#include "port_log.h" + /* * Reader */ @@ -40,7 +42,7 @@ rte_port_sched_reader_create(void *params, int socket_id) /* Check input parameters */ if ((conf == NULL) || (conf->sched == NULL)) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + PORT_LOG(ERR, "%s: Invalid params", __func__); return NULL; } @@ -48,7 +50,7 @@ rte_port_sched_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -74,7 +76,7 @@ static int rte_port_sched_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -139,7 +141,7 @@ rte_port_sched_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + PORT_LOG(ERR, "%s: Invalid params", __func__); return NULL; } @@ -147,7 +149,7 @@ rte_port_sched_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -247,7 +249,7 @@ static int rte_port_sched_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c index ff9677cdfe..4426f13499 100644 --- a/lib/port/rte_port_source_sink.c +++ b/lib/port/rte_port_source_sink.c @@ -15,6 +15,8 @@ #include "rte_port_source_sink.h" +#include "port_log.h" + /* * Port SOURCE */ @@ -75,8 +77,8 @@ 
pcap_source_load(struct rte_port_source *port, /* first time open, get packet number */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + PORT_LOG(ERR, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -88,29 +90,29 @@ pcap_source_load(struct rte_port_source *port, port->pkt_len = rte_zmalloc_socket("PCAP", (sizeof(*port->pkt_len) * n_pkts), 0, socket_id); if (port->pkt_len == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + PORT_LOG(ERR, "No enough memory"); goto error_exit; } pkt_len_aligns = rte_malloc("PCAP", (sizeof(*pkt_len_aligns) * n_pkts), 0); if (pkt_len_aligns == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + PORT_LOG(ERR, "No enough memory"); goto error_exit; } port->pkts = rte_zmalloc_socket("PCAP", (sizeof(*port->pkts) * n_pkts), 0, socket_id); if (port->pkts == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + PORT_LOG(ERR, "No enough memory"); goto error_exit; } /* open 2nd time, get pkt_len */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + PORT_LOG(ERR, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -128,7 +130,7 @@ pcap_source_load(struct rte_port_source *port, buff = rte_zmalloc_socket("PCAP", total_buff_len, 0, socket_id); if (buff == NULL) { - RTE_LOG(ERR, PORT, "No enough memory\n"); + PORT_LOG(ERR, "No enough memory"); goto error_exit; } @@ -137,8 +139,8 @@ pcap_source_load(struct rte_port_source *port, /* open file one last time to copy the pkt content */ pcap_handle = pcap_open_offline(file_name, pcap_errbuf); if (pcap_handle == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "'%s' for reading\n", file_name); + PORT_LOG(ERR, "Failed to open pcap file " + "'%s' for reading", file_name); goto error_exit; } @@ -155,8 +157,8 @@ pcap_source_load(struct rte_port_source *port, rte_free(pkt_len_aligns); - RTE_LOG(INFO, PORT, "Successfully load pcap file " - "'%s' with %u pkts\n", + PORT_LOG(INFO, "Successfully load pcap file " + "'%s' with %u pkts", file_name, port->n_pkts); return 0; @@ -180,8 +182,8 @@ pcap_source_load(struct rte_port_source *port, int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Source port field " \ - "\"file_name\" is not NULL.\n"); \ + PORT_LOG(ERR, "Source port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -199,7 +201,7 @@ rte_port_source_create(void *params, int socket_id) /* Check input arguments*/ if ((p == NULL) || (p->mempool == NULL)) { - RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__); + PORT_LOG(ERR, "%s: Invalid params", __func__); return NULL; } @@ -207,7 +209,7 @@ rte_port_source_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -332,15 +334,15 @@ pcap_sink_open(struct rte_port_sink *port, /** Open a dead pcap handler for opening dumper file */ tx_pcap = pcap_open_dead(DLT_EN10MB, 65535); if (tx_pcap == NULL) { - RTE_LOG(ERR, PORT, "Cannot open pcap dead handler\n"); + PORT_LOG(ERR, "Cannot open pcap dead handler"); return -1; } /* The dumper is created using the previous pcap_t reference */ pcap_dumper = pcap_dump_open(tx_pcap, file_name); 
if (pcap_dumper == NULL) { - RTE_LOG(ERR, PORT, "Failed to open pcap file " - "\"%s\" for writing\n", file_name); + PORT_LOG(ERR, "Failed to open pcap file " + "\"%s\" for writing", file_name); return -1; } @@ -349,7 +351,7 @@ pcap_sink_open(struct rte_port_sink *port, port->pkt_index = 0; port->dump_finish = 0; - RTE_LOG(INFO, PORT, "Ready to dump packets to file \"%s\"\n", + PORT_LOG(INFO, "Ready to dump packets to file \"%s\"", file_name); return 0; @@ -402,7 +404,7 @@ pcap_sink_write_pkt(struct rte_port_sink *port, struct rte_mbuf *mbuf) if ((port->max_pkts != 0) && (port->pkt_index >= port->max_pkts)) { port->dump_finish = 1; - RTE_LOG(INFO, PORT, "Dumped %u packets to file\n", + PORT_LOG(INFO, "Dumped %u packets to file", port->pkt_index); } @@ -433,8 +435,8 @@ do { \ int _ret = 0; \ \ if (file_name) { \ - RTE_LOG(ERR, PORT, "Sink port field " \ - "\"file_name\" is not NULL.\n"); \ + PORT_LOG(ERR, "Sink port field " \ + "\"file_name\" is not NULL."); \ _ret = -1; \ } \ \ @@ -459,7 +461,7 @@ rte_port_sink_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c index 27b7e07cea..a68a2396a5 100644 --- a/lib/port/rte_port_sym_crypto.c +++ b/lib/port/rte_port_sym_crypto.c @@ -8,6 +8,8 @@ #include "rte_port_sym_crypto.h" +#include "port_log.h" + /* * Port Crypto Reader */ @@ -44,7 +46,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) /* Check input parameters */ if (conf == NULL) { - RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__); + PORT_LOG(ERR, "%s: params is NULL", __func__); return NULL; } @@ -52,7 +54,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -100,7 +102,7 @@ static int rte_port_sym_crypto_reader_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__); + PORT_LOG(ERR, "%s: port is NULL", __func__); return -EINVAL; } @@ -167,7 +169,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -175,7 +177,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -285,7 +287,7 @@ static int rte_port_sym_crypto_writer_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } @@ -353,7 +355,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) (conf->tx_burst_sz == 0) || (conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) || (!rte_is_power_of_2(conf->tx_burst_sz))) { - RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__); + 
PORT_LOG(ERR, "%s: Invalid input parameters", __func__); return NULL; } @@ -361,7 +363,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id) port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE, socket_id); if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__); + PORT_LOG(ERR, "%s: Failed to allocate port", __func__); return NULL; } @@ -497,7 +499,7 @@ static int rte_port_sym_crypto_writer_nodrop_free(void *port) { if (port == NULL) { - RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__); + PORT_LOG(ERR, "%s: Port is NULL", __func__); return -EINVAL; } diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index a6f2097d5b..6949f26d33 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -19,6 +19,8 @@ RTE_LOG_REGISTER_SUFFIX(guest_channel_logtype, guest_channel, INFO); #define RTE_LOGTYPE_GUEST_CHANNEL guest_channel_logtype +#define GUEST_CHANNEL_LOG(level, fmt, ...) \ + RTE_LOG(level, GUEST_CHANNEL, fmt "\n", ## __VA_ARGS__) /* Timeout for incoming message in milliseconds. */ #define TIMEOUT 10 @@ -59,38 +61,38 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) int fd = -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + GUEST_CHANNEL_LOG(ERR, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } /* check if path is already open */ if (global_fds[lcore_id] != -1) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d\n", + GUEST_CHANNEL_LOG(ERR, "Channel(%u) is already open with fd %d", lcore_id, global_fds[lcore_id]); return -1; } snprintf(fd_path, PATH_MAX, "%s.%u", path, lcore_id); - RTE_LOG(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u\n", + GUEST_CHANNEL_LOG(INFO, "Opening channel '%s' for lcore %u", fd_path, lcore_id); fd = open(fd_path, O_RDWR); if (fd < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error " - "%s\n", fd_path, strerror(errno)); + GUEST_CHANNEL_LOG(ERR, "Unable to connect to '%s' with error " + "%s", fd_path, strerror(errno)); return -1; } flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s\n", + GUEST_CHANNEL_LOG(ERR, "Failed on fcntl get flags for file %s", fd_path); goto error; } flags |= O_NONBLOCK; if (fcntl(fd, F_SETFL, flags) < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for " - "file %s\n", fd_path); + GUEST_CHANNEL_LOG(ERR, "Failed on setting non-blocking mode for " + "file %s", fd_path); goto error; } /* QEMU needs a delay after connection */ @@ -103,13 +105,13 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id) global_fds[lcore_id] = fd; ret = guest_channel_send_msg(&pkt, lcore_id); if (ret != 0) { - RTE_LOG(ERR, GUEST_CHANNEL, - "Error on channel '%s' communications test: %s\n", + GUEST_CHANNEL_LOG(ERR, + "Error on channel '%s' communications test: %s", fd_path, ret > 0 ? 
strerror(ret) : "channel not connected"); goto error; } - RTE_LOG(INFO, GUEST_CHANNEL, "Channel '%s' is now connected\n", fd_path); + GUEST_CHANNEL_LOG(INFO, "Channel '%s' is now connected", fd_path); return 0; error: close(fd); @@ -125,13 +127,13 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt, void *buffer = pkt; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + GUEST_CHANNEL_LOG(ERR, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + GUEST_CHANNEL_LOG(ERR, "Channel is not connected"); return -1; } while (buffer_len > 0) { @@ -166,13 +168,13 @@ int power_guest_channel_read_msg(void *pkt, return -1; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + GUEST_CHANNEL_LOG(ERR, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } if (global_fds[lcore_id] < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n"); + GUEST_CHANNEL_LOG(ERR, "Channel is not connected"); return -1; } @@ -181,10 +183,10 @@ int power_guest_channel_read_msg(void *pkt, ret = poll(&fds, 1, TIMEOUT); if (ret == 0) { - RTE_LOG(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.\n"); + GUEST_CHANNEL_LOG(DEBUG, "Timeout occurred during poll function."); return -1; } else if (ret < 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s\n", + GUEST_CHANNEL_LOG(ERR, "Error occurred during poll function: %s", strerror(errno)); return -1; } @@ -200,7 +202,7 @@ int power_guest_channel_read_msg(void *pkt, } if (ret == 0) { - RTE_LOG(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.\n"); + GUEST_CHANNEL_LOG(ERR, "Expected more data, but connection has been closed."); return -1; } pkt = (char *)pkt + ret; @@ -221,7 +223,7 @@ void guest_channel_host_disconnect(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n", + GUEST_CHANNEL_LOG(ERR, "Channel(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index 8b55f19247..416d1fb6da 100644 --- a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -63,8 +63,8 @@ static int set_freq_internal(struct acpi_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + POWER_LOG(ERR, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -75,13 +75,13 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -127,14 +127,14 @@ power_get_available_freqs(struct acpi_power_info *pi) 
open_core_sysfs_file(&f, "r", POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_AVAIL_FREQ); goto out; } @@ -143,12 +143,12 @@ power_get_available_freqs(struct acpi_power_info *pi) count = rte_strsplit(buf, sizeof(buf), freqs, RTE_MAX_LCORE_FREQS, ' '); if (count <= 0) { - RTE_LOG(ERR, POWER, "No available frequency in " - ""POWER_SYSFILE_AVAIL_FREQ"\n", pi->lcore_id); + POWER_LOG(ERR, "No available frequency in " + POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id); goto out; } if (count >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies : %d\n", + POWER_LOG(ERR, "Too many available frequencies : %d", count); goto out; } @@ -196,14 +196,14 @@ power_init_for_setting_freq(struct acpi_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "Failed to open %s\n", + POWER_LOG(ERR, "Failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if ((ret) < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -237,7 +237,7 @@ power_acpi_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -253,42 +253,42 @@ power_acpi_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + POWER_LOG(ERR, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + POWER_LOG(ERR, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + POWER_LOG(INFO, "Initialized successfully for lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -310,7 +310,7 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) 
uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -325,8 +325,8 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -336,14 +336,14 @@ power_acpi_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + POWER_LOG(ERR, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + POWER_LOG(INFO, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -364,18 +364,18 @@ power_acpi_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + POWER_LOG(ERR, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + POWER_LOG(ERR, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -387,7 +387,7 @@ uint32_t power_acpi_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -398,7 +398,7 @@ int power_acpi_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -411,7 +411,7 @@ power_acpi_cpufreq_freq_down(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -429,7 +429,7 @@ power_acpi_cpufreq_freq_up(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -446,7 +446,7 @@ int power_acpi_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -470,7 +470,7 @@ power_acpi_cpufreq_freq_min(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -487,7 +487,7 @@ power_acpi_turbo_status(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -503,7 +503,7 @@ 
power_acpi_enable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -513,16 +513,16 @@ power_acpi_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + POWER_LOG(ERR, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } /* Max may have changed, so call to max function */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -536,7 +536,7 @@ power_acpi_disable_turbo(unsigned int lcore_id) struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -547,8 +547,8 @@ power_acpi_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_acpi_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -563,11 +563,11 @@ int power_acpi_get_capabilities(unsigned int lcore_id, struct acpi_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + POWER_LOG(ERR, "Invalid argument"); return -1; } diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index dbd9d2b3ee..67e3aa735a 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -70,8 +70,8 @@ static int set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + POWER_LOG(ERR, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -82,13 +82,13 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -119,7 +119,7 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -127,21 +127,21 @@ power_check_turbo(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", 
POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -190,7 +190,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -198,7 +198,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -206,28 +206,28 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_FREQ, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_nom, &nominal_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_NOMINAL_FREQ); goto out; } @@ -235,8 +235,8 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) power_check_turbo(pi); if (scaling_max_freq < scaling_min_freq) { - RTE_LOG(ERR, POWER, "scaling min freq exceeds max freq, " - "not expected! Check system power policy\n"); + POWER_LOG(ERR, "scaling min freq exceeds max freq, " + "not expected! 
Check system power policy"); goto out; } else if (scaling_max_freq == scaling_min_freq) { num_freqs = 1; @@ -304,14 +304,14 @@ power_init_for_setting_freq(struct amd_pstate_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -355,7 +355,7 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -371,42 +371,42 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + POWER_LOG(ERR, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + POWER_LOG(ERR, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + POWER_LOG(INFO, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -434,7 +434,7 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -449,8 +449,8 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -460,14 +460,14 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + POWER_LOG(ERR, "Cannot set the 
governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + POWER_LOG(INFO, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -484,18 +484,18 @@ power_amd_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + POWER_LOG(ERR, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + POWER_LOG(ERR, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -507,7 +507,7 @@ uint32_t power_amd_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -518,7 +518,7 @@ int power_amd_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -531,7 +531,7 @@ power_amd_pstate_cpufreq_freq_down(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -549,7 +549,7 @@ power_amd_pstate_cpufreq_freq_up(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -566,7 +566,7 @@ int power_amd_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -591,7 +591,7 @@ power_amd_pstate_cpufreq_freq_min(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_amd_pstate_turbo_status(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -622,7 +622,7 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -632,8 +632,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + POWER_LOG(ERR, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -643,8 +643,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -658,7 +658,7 @@ 
power_amd_pstate_disable_turbo(unsigned int lcore_id) struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -669,8 +669,8 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= pi->nom_idx)) { /* Try to set freq to max by default coming out of turbo */ if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -686,11 +686,11 @@ power_amd_pstate_get_capabilities(unsigned int lcore_id, struct amd_pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + POWER_LOG(ERR, "Invalid argument"); return -1; } diff --git a/lib/power/power_common.c b/lib/power/power_common.c index bf77eafa88..5d6c4291de 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -163,14 +163,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, open_core_sysfs_file(&f_governor, "rw+", POWER_SYSFILE_GOVERNOR, lcore_id); if (f_governor == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = read_core_sysfs_s(f_governor, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_GOVERNOR); goto out; } @@ -190,14 +190,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Write the new governor */ ret = write_core_sysfs_s(f_governor, new_governor); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to write %s\n", + POWER_LOG(ERR, "Failed to write %s", POWER_SYSFILE_GOVERNOR); goto out; } ret = 0; - RTE_LOG(INFO, POWER, "Power management governor of lcore %u has been " - "set to '%s' successfully\n", lcore_id, new_governor); + POWER_LOG(INFO, "Power management governor of lcore %u has been " + "set to '%s' successfully", lcore_id, new_governor); out: if (f_governor != NULL) fclose(f_governor); diff --git a/lib/power/power_common.h b/lib/power/power_common.h index c3fcbf4c10..4302370b5e 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -5,13 +5,15 @@ #ifndef _POWER_COMMON_H_ #define _POWER_COMMON_H_ - #include <rte_common.h> +#include <rte_log.h> #define RTE_POWER_INVALID_FREQ_INDEX (~0) extern int power_logtype; #define RTE_LOGTYPE_POWER power_logtype +#define POWER_LOG(level, fmt, ...) \ + RTE_LOG(level, POWER, fmt "\n", ## __VA_ARGS__) #ifdef RTE_LIBRTE_POWER_DEBUG #define POWER_DEBUG_TRACE(fmt, args...) 
\ diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index bb70f6ae52..dbbd166372 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -73,8 +73,8 @@ static int set_freq_internal(struct cppc_power_info *pi, uint32_t idx) { if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + POWER_LOG(ERR, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -85,13 +85,13 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } fflush(pi->f); @@ -122,7 +122,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } @@ -130,7 +130,7 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF, pi->lcore_id); if (f_nom == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } @@ -138,28 +138,28 @@ power_check_turbo(struct cppc_power_info *pi) open_core_sysfs_file(&f_cmax, "r", POWER_SYSFILE_SYS_MAX, pi->lcore_id); if (f_cmax == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SYS_MAX); goto err; } ret = read_core_sysfs_u32(f_max, &highest_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_HIGHEST_PERF); goto err; } ret = read_core_sysfs_u32(f_nom, &nominal_perf); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_NOMINAL_PERF); goto err; } ret = read_core_sysfs_u32(f_cmax, &cpuinfo_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SYS_MAX); goto err; } @@ -209,7 +209,7 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } @@ -217,21 +217,21 @@ power_get_available_freqs(struct cppc_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } ret = read_core_sysfs_u32(f_max, &scaling_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SCALING_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &scaling_min_freq); 
if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SCALING_MIN_FREQ); goto out; } @@ -249,7 +249,7 @@ power_get_available_freqs(struct cppc_power_info *pi) num_freqs = (nominal_perf - scaling_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + POWER_LOG(ERR, "Too many available frequencies: %d", num_freqs); goto out; } @@ -290,14 +290,14 @@ power_init_for_setting_freq(struct cppc_power_info *pi) open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id); if (f == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_SETSPEED); goto err; } ret = read_core_sysfs_s(f, buf, sizeof(buf)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_SETSPEED); goto err; } @@ -341,7 +341,7 @@ power_cppc_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -357,42 +357,42 @@ power_cppc_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_userspace(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "userspace\n", lcore_id); + POWER_LOG(ERR, "Cannot set governor of lcore %u to " + "userspace", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + POWER_LOG(ERR, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + POWER_LOG(INFO, "Initialized successfully for lcore %u " + "power management", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release); @@ -420,7 +420,7 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -435,8 +435,8 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + POWER_LOG(INFO, "Power management of 
lcore %u is " + "not used", lcore_id); return -1; } @@ -446,14 +446,14 @@ power_cppc_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + POWER_LOG(ERR, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + POWER_LOG(INFO, "Power management of lcore %u has exited from " "'userspace' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release); return 0; @@ -470,18 +470,18 @@ power_cppc_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + POWER_LOG(ERR, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + POWER_LOG(ERR, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -493,7 +493,7 @@ uint32_t power_cppc_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -504,7 +504,7 @@ int power_cppc_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -517,7 +517,7 @@ power_cppc_cpufreq_freq_down(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -535,7 +535,7 @@ power_cppc_cpufreq_freq_up(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -552,7 +552,7 @@ int power_cppc_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -576,7 +576,7 @@ power_cppc_cpufreq_freq_min(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -592,7 +592,7 @@ power_cppc_turbo_status(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -607,7 +607,7 @@ power_cppc_enable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -617,8 +617,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + POWER_LOG(ERR, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -628,8 +628,8 @@ power_cppc_enable_turbo(unsigned int lcore_id) */ /* Max may have changed, so call to max function */ if 
(power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -643,7 +643,7 @@ power_cppc_disable_turbo(unsigned int lcore_id) struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -654,8 +654,8 @@ power_cppc_disable_turbo(unsigned int lcore_id) if ((pi->turbo_available) && (pi->curr_idx <= 1)) { /* Try to set freq to max by default coming out of turbo */ if (power_cppc_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -671,11 +671,11 @@ power_cppc_get_capabilities(unsigned int lcore_id, struct cppc_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + POWER_LOG(ERR, "Invalid argument"); return -1; } diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index 688aebc4ee..c5c204c670 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -52,8 +52,8 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) int ret; if (idx >= MAX_UNCORE_FREQS || idx >= ui->nb_freqs) { - RTE_LOG(DEBUG, POWER, "Invalid uncore frequency index %u, which " - "should be less than %u\n", idx, ui->nb_freqs); + POWER_LOG(DEBUG, "Invalid uncore frequency index %u, which " + "should be less than %u", idx, ui->nb_freqs); return -1; } @@ -65,13 +65,13 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) open_core_sysfs_file(&ui->f_cur_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (ui->f_cur_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + POWER_LOG(DEBUG, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); return -1; } ret = read_core_sysfs_u32(ui->f_cur_max, &curr_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + POWER_LOG(DEBUG, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); fclose(ui->f_cur_max); return -1; @@ -79,14 +79,14 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) /* check this value first before fprintf value to f_cur_max, so value isn't overwritten */ if (fprintf(ui->f_cur_min, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + POWER_LOG(ERR, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", target_uncore_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + POWER_LOG(ERR, "Fail to write new uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -121,13 +121,13 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_max, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ, ui->pkg, ui->die); if (f_base_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + POWER_LOG(DEBUG, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } ret = read_core_sysfs_u32(f_base_max, &base_max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + 
POWER_LOG(DEBUG, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -136,14 +136,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_base_min, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ, ui->pkg, ui->die); if (f_base_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + POWER_LOG(DEBUG, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } if (f_base_min != NULL) { ret = read_core_sysfs_u32(f_base_min, &base_min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + POWER_LOG(DEBUG, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -153,14 +153,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_min, "rw+", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ, ui->pkg, ui->die); if (f_min == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + POWER_LOG(DEBUG, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } if (f_min != NULL) { ret = read_core_sysfs_u32(f_min, &min_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + POWER_LOG(DEBUG, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ); goto err; } @@ -170,14 +170,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui) open_core_sysfs_file(&f_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ, ui->pkg, ui->die); if (f_max == NULL) { - RTE_LOG(DEBUG, POWER, "failed to open %s\n", + POWER_LOG(DEBUG, "failed to open %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } if (f_max != NULL) { ret = read_core_sysfs_u32(f_max, &max_freq); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to read %s\n", + POWER_LOG(DEBUG, "Failed to read %s", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ); goto err; } @@ -222,7 +222,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) num_uncore_freqs = (ui->init_max_freq - ui->init_min_freq) / BUS_FREQ + 1; if (num_uncore_freqs >= MAX_UNCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available uncore frequencies: %d\n", + POWER_LOG(ERR, "Too many available uncore frequencies: %d", num_uncore_freqs); goto out; } @@ -250,7 +250,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_pkgs == 0) return -1; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Package number %02u can not exceed %u\n", + POWER_LOG(DEBUG, "Package number %02u can not exceed %u", pkg, max_pkgs); return -1; } @@ -259,7 +259,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die) if (max_dies == 0) return -1; if (die >= max_dies) { - RTE_LOG(DEBUG, POWER, "Die number %02u can not exceed %u\n", + POWER_LOG(DEBUG, "Die number %02u can not exceed %u", die, max_dies); return -1; } @@ -282,15 +282,15 @@ power_intel_uncore_init(unsigned int pkg, unsigned int die) /* Init for setting uncore die frequency */ if (power_init_for_setting_uncore_freq(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot init for setting uncore frequency for " - "pkg %02u die %02u\n", pkg, die); + POWER_LOG(DEBUG, "Cannot init for setting uncore frequency for " + "pkg %02u die %02u", pkg, die); return -1; } /* Get the available frequencies */ if (power_get_available_uncore_freqs(ui) < 0) { - RTE_LOG(DEBUG, POWER, "Cannot get available uncore frequencies of " - "pkg %02u die %02u\n", pkg, die); + POWER_LOG(DEBUG, "Cannot get available uncore frequencies of " + "pkg %02u die %02u", pkg, die); return -1; } @@ -309,14 +309,14 @@ power_intel_uncore_exit(unsigned int pkg, unsigned int die) ui = &uncore_info[pkg][die]; if 
(fprintf(ui->f_cur_min, "%u", ui->org_min_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + POWER_LOG(ERR, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } if (fprintf(ui->f_cur_max, "%u", ui->org_max_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for " - "pkg %02u die %02u\n", ui->pkg, ui->die); + POWER_LOG(ERR, "Fail to write original uncore frequency for " + "pkg %02u die %02u", ui->pkg, ui->die); return -1; } @@ -385,13 +385,13 @@ power_intel_uncore_freqs(unsigned int pkg, unsigned int die, uint32_t *freqs, ui return -1; if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + POWER_LOG(ERR, "NULL buffer supplied"); return 0; } ui = &uncore_info[pkg][die]; if (num < ui->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + POWER_LOG(ERR, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, ui->freqs, ui->nb_freqs * sizeof(uint32_t)); @@ -419,10 +419,10 @@ power_intel_uncore_get_num_pkgs(void) d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + POWER_LOG(ERR, "Uncore frequency management not supported/enabled on this kernel. " "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } @@ -451,16 +451,16 @@ power_intel_uncore_get_num_dies(unsigned int pkg) if (max_pkgs == 0) return 0; if (pkg >= max_pkgs) { - RTE_LOG(DEBUG, POWER, "Invalid package number\n"); + POWER_LOG(DEBUG, "Invalid package number"); return 0; } d = opendir(INTEL_UNCORE_FREQUENCY_DIR); if (d == NULL) { - RTE_LOG(ERR, POWER, + POWER_LOG(ERR, "Uncore frequency management not supported/enabled on this kernel. 
" "Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel" - " >= 5.6\n"); + " >= 5.6"); return 0; } diff --git a/lib/power/power_kvm_vm.c b/lib/power/power_kvm_vm.c index db031f4310..f15be8fac5 100644 --- a/lib/power/power_kvm_vm.c +++ b/lib/power/power_kvm_vm.c @@ -25,7 +25,7 @@ int power_kvm_vm_init(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + POWER_LOG(ERR, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -46,16 +46,16 @@ power_kvm_vm_freqs(__rte_unused unsigned int lcore_id, __rte_unused uint32_t *freqs, __rte_unused uint32_t num) { - RTE_LOG(ERR, POWER, "rte_power_freqs is not implemented " - "for Virtual Machine Power Management\n"); + POWER_LOG(ERR, "rte_power_freqs is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } uint32_t power_kvm_vm_get_freq(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_get_freq is not implemented " - "for Virtual Machine Power Management\n"); + POWER_LOG(ERR, "rte_power_get_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -63,8 +63,8 @@ int power_kvm_vm_set_freq(__rte_unused unsigned int lcore_id, __rte_unused uint32_t index) { - RTE_LOG(ERR, POWER, "rte_power_set_freq is not implemented " - "for Virtual Machine Power Management\n"); + POWER_LOG(ERR, "rte_power_set_freq is not implemented " + "for Virtual Machine Power Management"); return -ENOTSUP; } @@ -74,7 +74,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) int ret; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n", + POWER_LOG(ERR, "Core(%u) is out of range 0...%d", lcore_id, RTE_MAX_LCORE-1); return -1; } @@ -82,7 +82,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction) ret = guest_channel_send_msg(&pkt[lcore_id], lcore_id); if (ret == 0) return 1; - RTE_LOG(DEBUG, POWER, "Error sending message: %s\n", + POWER_LOG(DEBUG, "Error sending message: %s", ret > 0 ? 
strerror(ret) : "channel not connected"); return -1; } @@ -114,7 +114,7 @@ power_kvm_vm_freq_min(unsigned int lcore_id) int power_kvm_vm_turbo_status(__rte_unused unsigned int lcore_id) { - RTE_LOG(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management\n"); + POWER_LOG(ERR, "rte_power_turbo_status is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } @@ -134,6 +134,6 @@ struct rte_power_core_capabilities; int power_kvm_vm_get_capabilities(__rte_unused unsigned int lcore_id, __rte_unused struct rte_power_core_capabilities *caps) { - RTE_LOG(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management\n"); + POWER_LOG(ERR, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management"); return -ENOTSUP; } diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index 5ca5f60bcd..c287ac54f8 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -82,7 +82,7 @@ power_read_turbo_pct(uint64_t *outVal) fd = open(POWER_SYSFILE_TURBO_PCT, O_RDONLY); if (fd < 0) { - RTE_LOG(ERR, POWER, "Error opening '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + POWER_LOG(ERR, "Error opening '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); return fd; } @@ -90,7 +90,7 @@ power_read_turbo_pct(uint64_t *outVal) ret = read(fd, val, sizeof(val)); if (ret < 0) { - RTE_LOG(ERR, POWER, "Error reading '%s': %s\n", POWER_SYSFILE_TURBO_PCT, + POWER_LOG(ERR, "Error reading '%s': %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); goto out; } @@ -98,7 +98,7 @@ power_read_turbo_pct(uint64_t *outVal) errno = 0; *outVal = (uint64_t) strtol(val, &endptr, 10); if (errno != 0 || (*endptr != 0 && *endptr != '\n')) { - RTE_LOG(ERR, POWER, "Error converting str to digits, read from %s: %s\n", + POWER_LOG(ERR, "Error converting str to digits, read from %s: %s", POWER_SYSFILE_TURBO_PCT, strerror(errno)); ret = -1; goto out; @@ -126,7 +126,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_base_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -134,7 +134,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_base_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if (f_base_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -142,7 +142,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "rw+", POWER_SYSFILE_MIN_FREQ, pi->lcore_id); if (f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_MIN_FREQ); goto err; } @@ -150,7 +150,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "rw+", POWER_SYSFILE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_MAX_FREQ); goto err; } @@ -162,7 +162,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) /* read base max ratio */ ret = read_core_sysfs_u32(f_base_max, &base_max_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_BASE_MAX_FREQ); goto err; } @@ -170,7 +170,7 @@ power_init_for_setting_freq(struct 
pstate_power_info *pi) /* read base min ratio */ ret = read_core_sysfs_u32(f_base_min, &base_min_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto err; } @@ -179,7 +179,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) if (f_base != NULL) { ret = read_core_sysfs_u32(f_base, &base_ratio); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_BASE_FREQ); goto err; } @@ -257,8 +257,8 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) uint32_t target_freq = 0; if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Invalid frequency index %u, which " - "should be less than %u\n", idx, pi->nb_freqs); + POWER_LOG(ERR, "Invalid frequency index %u, which " + "should be less than %u", idx, pi->nb_freqs); return -1; } @@ -270,15 +270,15 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) * User need change the min/max as same value. */ if (fseek(pi->f_cur_min, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + POWER_LOG(ERR, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } if (fseek(pi->f_cur_max, 0, SEEK_SET) < 0) { - RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 " - "for setting frequency for lcore %u\n", + POWER_LOG(ERR, "Fail to set file position indicator to 0 " + "for setting frequency for lcore %u", pi->lcore_id); return -1; } @@ -288,7 +288,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (pi->turbo_enable) target_freq = pi->sys_max_freq; else { - RTE_LOG(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u\n", + POWER_LOG(ERR, "Turbo is off, frequency can't be scaled up more %u", pi->lcore_id); return -1; } @@ -299,14 +299,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx > pi->curr_idx) { if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -322,14 +322,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) if (idx < pi->curr_idx) { if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) { - RTE_LOG(ERR, POWER, "Fail to write new frequency for " - "lcore %u\n", pi->lcore_id); + POWER_LOG(ERR, "Fail to write new frequency for " + "lcore %u", pi->lcore_id); return -1; } @@ -384,7 +384,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_BASE_MAX_FREQ, pi->lcore_id); if (f_max == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } @@ -392,7 +392,7 @@ power_get_available_freqs(struct pstate_power_info *pi) open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_BASE_MIN_FREQ, pi->lcore_id); if 
(f_min == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -400,14 +400,14 @@ power_get_available_freqs(struct pstate_power_info *pi) /* read base ratios */ ret = read_core_sysfs_u32(f_max, &sys_max_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_BASE_MAX_FREQ); goto out; } ret = read_core_sysfs_u32(f_min, &sys_min_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_BASE_MIN_FREQ); goto out; } @@ -450,7 +450,7 @@ power_get_available_freqs(struct pstate_power_info *pi) num_freqs = (RTE_MIN(base_max_freq, sys_max_freq) - sys_min_freq) / BUS_FREQ + 1 + pi->turbo_available; if (num_freqs >= RTE_MAX_LCORE_FREQS) { - RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n", + POWER_LOG(ERR, "Too many available frequencies: %d", num_freqs); goto out; } @@ -494,14 +494,14 @@ power_get_cur_idx(struct pstate_power_info *pi) open_core_sysfs_file(&f_cur, "r", POWER_SYSFILE_CUR_FREQ, pi->lcore_id); if (f_cur == NULL) { - RTE_LOG(ERR, POWER, "failed to open %s\n", + POWER_LOG(ERR, "failed to open %s", POWER_SYSFILE_CUR_FREQ); goto fail; } ret = read_core_sysfs_u32(f_cur, &sys_cur_freq); if (ret < 0) { - RTE_LOG(ERR, POWER, "Failed to read %s\n", + POWER_LOG(ERR, "Failed to read %s", POWER_SYSFILE_CUR_FREQ); goto fail; } @@ -543,7 +543,7 @@ power_pstate_cpufreq_init(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceed %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceed %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -559,47 +559,47 @@ power_pstate_cpufreq_init(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "in use\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "in use", lcore_id); return -1; } pi->lcore_id = lcore_id; /* Check and set the governor */ if (power_set_governor_performance(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to " - "performance\n", lcore_id); + POWER_LOG(ERR, "Cannot set governor of lcore %u to " + "performance", lcore_id); goto fail; } /* Init for setting lcore frequency */ if (power_init_for_setting_freq(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot init for setting frequency for " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot init for setting frequency for " + "lcore %u", lcore_id); goto fail; } /* Get the available frequencies */ if (power_get_available_freqs(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get available frequencies of " - "lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot get available frequencies of " + "lcore %u", lcore_id); goto fail; } if (power_get_cur_idx(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot get current frequency " - "index of lcore %u\n", lcore_id); + POWER_LOG(ERR, "Cannot get current frequency " + "index of lcore %u", lcore_id); goto fail; } /* Set freq to max by default */ if (power_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u " - "to max\n", lcore_id); + POWER_LOG(ERR, "Cannot set frequency of lcore %u " + "to max", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u " - "power management\n", lcore_id); + POWER_LOG(INFO, "Initialized successfully for 
lcore %u " + "power management", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED, rte_memory_order_release, rte_memory_order_relaxed); @@ -621,7 +621,7 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) uint32_t exp_state; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n", + POWER_LOG(ERR, "Lcore id %u can not exceeds %u", lcore_id, RTE_MAX_LCORE - 1U); return -1; } @@ -637,8 +637,8 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_ONGOING, rte_memory_order_acquire, rte_memory_order_relaxed)) { - RTE_LOG(INFO, POWER, "Power management of lcore %u is " - "not used\n", lcore_id); + POWER_LOG(INFO, "Power management of lcore %u is " + "not used", lcore_id); return -1; } @@ -650,14 +650,14 @@ power_pstate_cpufreq_exit(unsigned int lcore_id) /* Set the governor back to the original */ if (power_set_governor_original(pi) < 0) { - RTE_LOG(ERR, POWER, "Cannot set the governor of %u back " - "to the original\n", lcore_id); + POWER_LOG(ERR, "Cannot set the governor of %u back " + "to the original", lcore_id); goto fail; } - RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from " + POWER_LOG(INFO, "Power management of lcore %u has exited from " "'performance' mode and been set back to the " - "original\n", lcore_id); + "original", lcore_id); exp_state = POWER_ONGOING; rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE, rte_memory_order_release, rte_memory_order_relaxed); @@ -679,18 +679,18 @@ power_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return 0; } if (freqs == NULL) { - RTE_LOG(ERR, POWER, "NULL buffer supplied\n"); + POWER_LOG(ERR, "NULL buffer supplied"); return 0; } pi = &lcore_power_info[lcore_id]; if (num < pi->nb_freqs) { - RTE_LOG(ERR, POWER, "Buffer size is not enough\n"); + POWER_LOG(ERR, "Buffer size is not enough"); return 0; } rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t)); @@ -702,7 +702,7 @@ uint32_t power_pstate_cpufreq_get_freq(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return RTE_POWER_INVALID_FREQ_INDEX; } @@ -714,7 +714,7 @@ int power_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -727,7 +727,7 @@ power_pstate_cpufreq_freq_up(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -746,7 +746,7 @@ power_pstate_cpufreq_freq_down(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -762,7 +762,7 @@ int power_pstate_cpufreq_freq_max(unsigned int lcore_id) { if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -787,7 +787,7 @@ power_pstate_cpufreq_freq_min(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore 
ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -804,7 +804,7 @@ power_pstate_turbo_status(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -819,7 +819,7 @@ power_pstate_enable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -829,8 +829,8 @@ power_pstate_enable_turbo(unsigned int lcore_id) pi->turbo_enable = 1; else { pi->turbo_enable = 0; - RTE_LOG(ERR, POWER, - "Failed to enable turbo on lcore %u\n", + POWER_LOG(ERR, + "Failed to enable turbo on lcore %u", lcore_id); return -1; } @@ -845,7 +845,7 @@ power_pstate_disable_turbo(unsigned int lcore_id) struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } @@ -856,8 +856,8 @@ power_pstate_disable_turbo(unsigned int lcore_id) if (pi->turbo_available && pi->curr_idx <= 1) { /* Try to set freq to max by default coming out of turbo */ if (power_pstate_cpufreq_freq_max(lcore_id) < 0) { - RTE_LOG(ERR, POWER, - "Failed to set frequency of lcore %u to max\n", + POWER_LOG(ERR, + "Failed to set frequency of lcore %u to max", lcore_id); return -1; } @@ -873,11 +873,11 @@ int power_pstate_get_capabilities(unsigned int lcore_id, struct pstate_power_info *pi; if (lcore_id >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID\n"); + POWER_LOG(ERR, "Invalid lcore ID"); return -1; } if (caps == NULL) { - RTE_LOG(ERR, POWER, "Invalid argument\n"); + POWER_LOG(ERR, "Invalid argument"); return -1; } diff --git a/lib/power/rte_power.c b/lib/power/rte_power.c index 1502612b0a..36c3f3da98 100644 --- a/lib/power/rte_power.c +++ b/lib/power/rte_power.c @@ -74,7 +74,7 @@ rte_power_set_env(enum power_management_env env) rte_spinlock_lock(&global_env_cfg_lock); if (global_default_env != PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Power Management Environment already set.\n"); + POWER_LOG(ERR, "Power Management Environment already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -143,7 +143,7 @@ rte_power_set_env(enum power_management_env env) rte_power_freq_disable_turbo = power_amd_pstate_disable_turbo; rte_power_get_capabilities = power_amd_pstate_get_capabilities; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", + POWER_LOG(ERR, "Invalid Power Management Environment(%d) set", env); ret = -1; } @@ -190,46 +190,46 @@ rte_power_init(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_init(lcore_id); default: - RTE_LOG(INFO, POWER, "Env isn't set yet!\n"); + POWER_LOG(INFO, "Env isn't set yet!"); } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...\n"); + POWER_LOG(INFO, "Attempting to initialise ACPI cpufreq power management..."); ret = power_acpi_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_ACPI_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise PSTAT power management...\n"); + POWER_LOG(INFO, "Attempting to initialise PSTAT power management..."); ret = power_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise AMD PSTATE power management...\n"); + POWER_LOG(INFO, "Attempting to initialise AMD 
PSTATE power management..."); ret = power_amd_pstate_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_AMD_PSTATE_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise CPPC power management...\n"); + POWER_LOG(INFO, "Attempting to initialise CPPC power management..."); ret = power_cppc_cpufreq_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_CPPC_CPUFREQ); goto out; } - RTE_LOG(INFO, POWER, "Attempting to initialise VM power management...\n"); + POWER_LOG(INFO, "Attempting to initialise VM power management..."); ret = power_kvm_vm_init(lcore_id); if (ret == 0) { rte_power_set_env(PM_ENV_KVM_VM); goto out; } - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment for lcore " - "%u\n", lcore_id); + POWER_LOG(ERR, "Unable to set Power Management Environment for lcore " + "%u", lcore_id); out: return ret; } @@ -249,7 +249,7 @@ rte_power_exit(unsigned int lcore_id) case PM_ENV_AMD_PSTATE_CPUFREQ: return power_amd_pstate_cpufreq_exit(lcore_id); default: - RTE_LOG(ERR, POWER, "Environment has not been set, unable to exit gracefully\n"); + POWER_LOG(ERR, "Environment has not been set, unable to exit gracefully"); } return -1; diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c index 6f18ed0adf..591fc69f36 100644 --- a/lib/power/rte_power_pmd_mgmt.c +++ b/lib/power/rte_power_pmd_mgmt.c @@ -146,7 +146,7 @@ get_monitor_addresses(struct pmd_core_cfg *cfg, /* attempted out of bounds access */ if (i >= len) { - RTE_LOG(ERR, POWER, "Too many queues being monitored\n"); + POWER_LOG(ERR, "Too many queues being monitored"); return -1; } @@ -423,7 +423,7 @@ check_scale(unsigned int lcore) if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) && !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n"); + POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes are supported"); return -ENOTSUP; } /* ensure we could initialize the power library */ @@ -434,7 +434,7 @@ check_scale(unsigned int lcore) env = rte_power_get_env(); if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ && env != PM_ENV_AMD_PSTATE_CPUFREQ) { - RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n"); + POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes were initialized"); return -ENOTSUP; } @@ -450,7 +450,7 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* check if rte_power_monitor is supported */ if (!global_data.intrinsics_support.power_monitor) { - RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n"); + POWER_LOG(DEBUG, "Monitoring intrinsics are not supported"); return -ENOTSUP; } /* check if multi-monitor is supported */ @@ -459,14 +459,14 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata) /* if we're adding a new queue, do we support multiple queues? 
*/ if (cfg->n_queues > 0 && !multimonitor_supported) { - RTE_LOG(DEBUG, POWER, "Monitoring multiple queues is not supported\n"); + POWER_LOG(DEBUG, "Monitoring multiple queues is not supported"); return -ENOTSUP; } /* check if the device supports the necessary PMD API */ if (rte_eth_get_monitor_addr(qdata->portid, qdata->qid, &dummy) == -ENOTSUP) { - RTE_LOG(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr\n"); + POWER_LOG(DEBUG, "The device does not support rte_eth_get_monitor_addr"); return -ENOTSUP; } @@ -566,14 +566,14 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id, clb = clb_pause; break; default: - RTE_LOG(DEBUG, POWER, "Invalid power management type\n"); + POWER_LOG(DEBUG, "Invalid power management type"); ret = -EINVAL; goto end; } /* add this queue to the list */ ret = queue_list_add(lcore_cfg, &qdata); if (ret < 0) { - RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n", + POWER_LOG(DEBUG, "Failed to add queue to list: %s", strerror(-ret)); goto end; } @@ -686,7 +686,7 @@ int rte_power_pmd_mgmt_set_pause_duration(unsigned int duration) { if (duration == 0) { - RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n"); + POWER_LOG(ERR, "Pause duration must be greater than 0, value unchanged"); return -EINVAL; } pause_duration = duration; @@ -704,12 +704,12 @@ int rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + POWER_LOG(ERR, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (min > scale_freq_max[lcore]) { - RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n"); + POWER_LOG(ERR, "Invalid min frequency: Cannot be greater than max frequency"); return -EINVAL; } scale_freq_min[lcore] = min; @@ -721,7 +721,7 @@ int rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + POWER_LOG(ERR, "Invalid lcore ID: %u", lcore); return -EINVAL; } @@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max) if (max == 0) max = UINT32_MAX; if (max < scale_freq_min[lcore]) { - RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n"); + POWER_LOG(ERR, "Invalid max frequency: Cannot be less than min frequency"); return -EINVAL; } @@ -742,12 +742,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + POWER_LOG(ERR, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == 0) - RTE_LOG(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.\n"); + POWER_LOG(DEBUG, "Scaling freq min config not set. Using sysfs min freq."); return scale_freq_min[lcore]; } @@ -756,12 +756,12 @@ int rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore) { if (lcore >= RTE_MAX_LCORE) { - RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore); + POWER_LOG(ERR, "Invalid lcore ID: %u", lcore); return -EINVAL; } if (scale_freq_max[lcore] == UINT32_MAX) { - RTE_LOG(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.\n"); + POWER_LOG(DEBUG, "Scaling freq max config not set. 
Using sysfs max freq."); return 0; } diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c index 9c20fe150d..48c75a5da0 100644 --- a/lib/power/rte_power_uncore.c +++ b/lib/power/rte_power_uncore.c @@ -101,7 +101,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_spinlock_lock(&global_env_cfg_lock); if (default_uncore_env != RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Uncore Power Management Env already set.\n"); + POWER_LOG(ERR, "Uncore Power Management Env already set."); rte_spinlock_unlock(&global_env_cfg_lock); return -1; } @@ -124,7 +124,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env) rte_power_uncore_get_num_pkgs = power_intel_uncore_get_num_pkgs; rte_power_uncore_get_num_dies = power_intel_uncore_get_num_dies; } else { - RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", env); + POWER_LOG(ERR, "Invalid Power Management Environment(%d) set", env); ret = -1; goto out; } @@ -159,12 +159,12 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_init(pkg, die); default: - RTE_LOG(INFO, POWER, "Uncore Env isn't set yet!\n"); + POWER_LOG(INFO, "Uncore Env isn't set yet!"); break; } /* Auto detect Environment */ - RTE_LOG(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...\n"); + POWER_LOG(INFO, "Attempting to initialise Intel Uncore power mgmt..."); ret = power_intel_uncore_init(pkg, die); if (ret == 0) { rte_power_set_uncore_env(RTE_UNCORE_PM_ENV_INTEL_UNCORE); @@ -172,8 +172,8 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die) } if (default_uncore_env == RTE_UNCORE_PM_ENV_NOT_SET) { - RTE_LOG(ERR, POWER, "Unable to set Power Management Environment " - "for package %u Die %u\n", pkg, die); + POWER_LOG(ERR, "Unable to set Power Management Environment " + "for package %u Die %u", pkg, die); ret = 0; } out: @@ -187,7 +187,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die) case RTE_UNCORE_PM_ENV_INTEL_UNCORE: return power_intel_uncore_exit(pkg, die); default: - RTE_LOG(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully\n"); + POWER_LOG(ERR, "Uncore Env has not been set, unable to exit gracefully"); break; } return -1; diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 41a44be4b9..5b6530788a 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -19,6 +19,9 @@ #include "rte_rcu_qsbr.h" #include "rcu_qsbr_pvt.h" +#define RCU_LOG(level, fmt, args...) 
\ + RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + /* Get the memory size of QSBR variable */ size_t rte_rcu_qsbr_get_memsize(uint32_t max_threads) @@ -26,9 +29,7 @@ rte_rcu_qsbr_get_memsize(uint32_t max_threads) size_t sz; if (max_threads == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid max_threads %u\n", - __func__, max_threads); + RCU_LOG(ERR, "Invalid max_threads %u", max_threads); rte_errno = EINVAL; return 1; @@ -52,8 +53,7 @@ rte_rcu_qsbr_init(struct rte_rcu_qsbr *v, uint32_t max_threads) size_t sz; if (v == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -85,8 +85,7 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -137,8 +136,7 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, unsigned int thread_id) uint64_t old_bmap, new_bmap; if (v == NULL || thread_id >= v->max_threads) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -211,8 +209,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v) uint32_t i, t, id; if (v == NULL || f == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -282,8 +279,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) params->v == NULL || params->name == NULL || params->size == 0 || params->esize == 0 || (params->esize % 4 != 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return NULL; @@ -293,9 +289,10 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) */ if ((params->trigger_reclaim_limit <= params->size) && (params->max_reclaim_size == 0)) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter, size = %u, trigger_reclaim_limit = %u, max_reclaim_size = %u\n", - __func__, params->size, params->trigger_reclaim_limit, + RCU_LOG(ERR, + "Invalid input parameter, size = %u, trigger_reclaim_limit = %u, " + "max_reclaim_size = %u", + params->size, params->trigger_reclaim_limit, params->max_reclaim_size); rte_errno = EINVAL; @@ -328,8 +325,7 @@ rte_rcu_qsbr_dq_create(const struct rte_rcu_qsbr_dq_parameters *params) __RTE_QSBR_TOKEN_SIZE + params->esize, qs_fifo_size, SOCKET_ID_ANY, flags); if (dq->r == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): defer queue create failed\n", __func__); + RCU_LOG(ERR, "defer queue create failed"); rte_free(dq); return NULL; } @@ -354,8 +350,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) uint32_t cur_size; if (dq == NULL || e == NULL) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -372,8 +367,7 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) */ cur_size = rte_ring_count(dq->r); if (cur_size > dq->trigger_reclaim_limit) { - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Triggering reclamation\n", __func__); + 
RCU_LOG(INFO, "Triggering reclamation"); rte_rcu_qsbr_dq_reclaim(dq, dq->max_reclaim_size, NULL, NULL, NULL); } @@ -391,23 +385,18 @@ int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e) * Enqueue uses the configured flags when the DQ was created. */ if (rte_ring_enqueue_elem(dq->r, data, dq->esize) != 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Enqueue failed\n", __func__); + RCU_LOG(ERR, "Enqueue failed"); /* Note that the token generated above is not used. * Other than wasting tokens, it should not cause any * other issues. */ - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Skipped enqueuing token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Skipped enqueuing token = %" PRIu64, dq_elem->token); rte_errno = ENOSPC; return 1; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Enqueued token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Enqueued token = %" PRIu64, dq_elem->token); return 0; } @@ -422,8 +411,7 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, __rte_rcu_qsbr_dq_elem_t *dq_elem; if (dq == NULL || n == 0) { - rte_log(RTE_LOG_ERR, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(ERR, "Invalid input parameter"); rte_errno = EINVAL; return 1; @@ -445,17 +433,14 @@ rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq, unsigned int n, } rte_ring_dequeue_elem_finish(dq->r, 1); - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed token = %" PRIu64 "\n", - __func__, dq_elem->token); + RCU_LOG(INFO, "Reclaimed token = %" PRIu64, dq_elem->token); dq->free_fn(dq->p, dq_elem->elem, 1); cnt++; } - rte_log(RTE_LOG_INFO, rte_rcu_log_type, - "%s(): Reclaimed %u resources\n", __func__, cnt); + RCU_LOG(INFO, "Reclaimed %u resources", cnt); if (freed != NULL) *freed = cnt; @@ -472,8 +457,7 @@ rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) unsigned int pending; if (dq == NULL) { - rte_log(RTE_LOG_DEBUG, rte_rcu_log_type, - "%s(): Invalid input parameter\n", __func__); + RCU_LOG(DEBUG, "Invalid input parameter"); return 0; } diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 6b908e7ee0..0dca8310c0 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -36,6 +36,7 @@ extern "C" { #include <rte_ring.h> extern int rte_rcu_log_type; +#define RTE_LOGTYPE_RCU rte_rcu_log_type #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c index 640719c3ec..fd550b03af 100644 --- a/lib/reorder/rte_reorder.c +++ b/lib/reorder/rte_reorder.c @@ -18,6 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(reorder_logtype, INFO); #define RTE_LOGTYPE_REORDER reorder_logtype +#define REORDER_LOG(level, fmt, ...) 
\ + RTE_LOG(level, REORDER, fmt "\n", ## __VA_ARGS__) TAILQ_HEAD(rte_reorder_list, rte_tailq_entry); @@ -74,34 +76,34 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize, }; if (b == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer parameter:" - " NULL\n"); + REORDER_LOG(ERR, "Invalid reorder buffer parameter:" + " NULL"); rte_errno = EINVAL; return NULL; } if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + REORDER_LOG(ERR, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + REORDER_LOG(ERR, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } if (bufsize < min_bufsize) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer memory size: %u, " - "minimum required: %u\n", bufsize, min_bufsize); + REORDER_LOG(ERR, "Invalid reorder buffer memory size: %u, " + "minimum required: %u", bufsize, min_bufsize); rte_errno = EINVAL; return NULL; } rte_reorder_seqn_dynfield_offset = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc); if (rte_reorder_seqn_dynfield_offset < 0) { - RTE_LOG(ERR, REORDER, - "Failed to register mbuf field for reorder sequence number, rte_errno: %i\n", + REORDER_LOG(ERR, + "Failed to register mbuf field for reorder sequence number, rte_errno: %i", rte_errno); rte_errno = ENOMEM; return NULL; @@ -161,14 +163,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Check user arguments. */ if (!rte_is_power_of_2(size)) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer size" - " - Not a power of 2\n"); + REORDER_LOG(ERR, "Invalid reorder buffer size" + " - Not a power of 2"); rte_errno = EINVAL; return NULL; } if (name == NULL) { - RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:" - " NULL\n"); + REORDER_LOG(ERR, "Invalid reorder buffer name ptr:" + " NULL"); rte_errno = EINVAL; return NULL; } @@ -176,7 +178,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* allocate tailq entry */ te = rte_zmalloc("REORDER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, REORDER, "Failed to allocate tailq entry\n"); + REORDER_LOG(ERR, "Failed to allocate tailq entry"); rte_errno = ENOMEM; return NULL; } @@ -184,7 +186,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size) /* Allocate memory to store the reorder buffer structure. */ b = rte_zmalloc_socket("REORDER_BUFFER", bufsize, 0, socket_id); if (b == NULL) { - RTE_LOG(ERR, REORDER, "Memzone allocation failed\n"); + REORDER_LOG(ERR, "Memzone allocation failed"); rte_errno = ENOMEM; rte_free(te); return NULL; diff --git a/lib/rib/rib_log.h b/lib/rib/rib_log.h index f3ee513ca8..ce74a4ce3e 100644 --- a/lib/rib/rib_log.h +++ b/lib/rib/rib_log.h @@ -1,4 +1,8 @@ /* SPDX-License-Identifier: BSD-3-Clause */ +#include <rte_log.h> + extern int rib_logtype; -#define RTE_LOGTYPE_LPM rib_logtype +#define RTE_LOGTYPE_RIB rib_logtype +#define RIB_LOG(level, fmt, ...) 
\ + RTE_LOG(level, RIB, fmt "\n", ## __VA_ARGS__) diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c index 251d0d4ef1..aa3296de19 100644 --- a/lib/rib/rte_rib.c +++ b/lib/rib/rte_rib.c @@ -16,8 +16,9 @@ #include <rte_rib.h> +#include "rib_log.h" + RTE_LOG_REGISTER_DEFAULT(rib_logtype, INFO); -#define RTE_LOGTYPE_LPM rib_logtype TAILQ_HEAD(rte_rib_list, rte_tailq_entry); static struct rte_tailq_elem rte_rib_tailq = { @@ -416,8 +417,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB %s\n", name); + RIB_LOG(ERR, + "Can not allocate mempool for RIB %s", name); return NULL; } @@ -441,8 +442,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) /* allocate tailq entry */ te = rte_zmalloc("RIB_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB %s\n", name); + RIB_LOG(ERR, + "Can not allocate tailq entry for RIB %s", name); rte_errno = ENOMEM; goto exit; } @@ -451,7 +452,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf) rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB %s memory allocation failed\n", name); + RIB_LOG(ERR, "RIB %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c index ad3d48ab8e..b7df3926f8 100644 --- a/lib/rib/rte_rib6.c +++ b/lib/rib/rte_rib6.c @@ -485,8 +485,8 @@ rte_rib6_create(const char *name, int socket_id, NULL, NULL, NULL, NULL, socket_id, 0); if (node_pool == NULL) { - RTE_LOG(ERR, LPM, - "Can not allocate mempool for RIB6 %s\n", name); + RIB_LOG(ERR, + "Can not allocate mempool for RIB6 %s", name); return NULL; } @@ -510,8 +510,8 @@ rte_rib6_create(const char *name, int socket_id, /* allocate tailq entry */ te = rte_zmalloc("RIB6_TAILQ_ENTRY", sizeof(*te), 0); if (unlikely(te == NULL)) { - RTE_LOG(ERR, LPM, - "Can not allocate tailq entry for RIB6 %s\n", name); + RIB_LOG(ERR, + "Can not allocate tailq entry for RIB6 %s", name); rte_errno = ENOMEM; goto exit; } @@ -520,7 +520,7 @@ rte_rib6_create(const char *name, int socket_id, rib = rte_zmalloc_socket(mem_name, sizeof(struct rte_rib6), RTE_CACHE_LINE_SIZE, socket_id); if (unlikely(rib == NULL)) { - RTE_LOG(ERR, LPM, "RIB6 %s memory allocation failed\n", name); + RIB_LOG(ERR, "RIB6 %s memory allocation failed", name); rte_errno = ENOMEM; goto free_te; } diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c index 12046419f1..118ffab4b9 100644 --- a/lib/ring/rte_ring.c +++ b/lib/ring/rte_ring.c @@ -28,6 +28,8 @@ RTE_LOG_REGISTER_DEFAULT(ring_logtype, INFO); #define RTE_LOGTYPE_RING ring_logtype +#define RING_LOG(level, fmt, ...) 
\ + RTE_LOG(level, RING, fmt "\n", ## __VA_ARGS__) TAILQ_HEAD(rte_ring_list, rte_tailq_entry); @@ -55,15 +57,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count) /* Check if element size is a multiple of 4B */ if (esize % 4 != 0) { - RTE_LOG(ERR, RING, "element size is not a multiple of 4\n"); + RING_LOG(ERR, "element size is not a multiple of 4"); return -EINVAL; } /* count must be a power of 2 */ if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) { - RTE_LOG(ERR, RING, - "Requested number of elements is invalid, must be power of 2, and not exceed %u\n", + RING_LOG(ERR, + "Requested number of elements is invalid, must be power of 2, and not exceed %u", RTE_RING_SZ_MASK); return -EINVAL; @@ -198,8 +200,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, /* future proof flags, only allow supported values */ if (flags & ~RING_F_MASK) { - RTE_LOG(ERR, RING, - "Unsupported flags requested %#x\n", flags); + RING_LOG(ERR, + "Unsupported flags requested %#x", flags); return -EINVAL; } @@ -219,8 +221,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count, r->capacity = count; } else { if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) { - RTE_LOG(ERR, RING, - "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n", + RING_LOG(ERR, + "Requested size is invalid, must be power of 2, and not exceed the size limit %u", RTE_RING_SZ_MASK); return -EINVAL; } @@ -274,7 +276,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n"); + RING_LOG(ERR, "Cannot reserve memory for tailq"); rte_errno = ENOMEM; return NULL; } @@ -299,7 +301,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count, TAILQ_INSERT_TAIL(ring_list, te, next); } else { r = NULL; - RTE_LOG(ERR, RING, "Cannot reserve memory\n"); + RING_LOG(ERR, "Cannot reserve memory"); rte_free(te); } rte_mcfg_tailq_write_unlock(); @@ -331,8 +333,8 @@ rte_ring_free(struct rte_ring *r) * therefore, there is no memzone to free. 
*/ if (r->memzone == NULL) { - RTE_LOG(ERR, RING, - "Cannot free ring, not created with rte_ring_create()\n"); + RING_LOG(ERR, + "Cannot free ring, not created with rte_ring_create()"); return; } @@ -355,7 +357,7 @@ rte_ring_free(struct rte_ring *r) rte_mcfg_tailq_write_unlock(); if (rte_memzone_free(r->memzone) != 0) - RTE_LOG(ERR, RING, "Cannot free memory\n"); + RING_LOG(ERR, "Cannot free memory"); rte_free(te); } diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c index cce0ce762d..2eb0b0f74e 100644 --- a/lib/sched/rte_pie.c +++ b/lib/sched/rte_pie.c @@ -17,7 +17,7 @@ int rte_pie_rt_data_init(struct rte_pie *pie) { if (pie == NULL) { - RTE_LOG(ERR, SCHED, "%s: Invalid addr for pie\n", __func__); + SCHED_LOG(ERR, "%s: Invalid addr for pie", __func__); return -EINVAL; } @@ -39,26 +39,26 @@ rte_pie_config_init(struct rte_pie_config *pie_cfg, return -1; if (qdelay_ref <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qdelay_ref\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for qdelay_ref", __func__); return -EINVAL; } if (dp_update_interval <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for dp_update_interval\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for dp_update_interval", __func__); return -EINVAL; } if (max_burst <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for max_burst\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for max_burst", __func__); return -EINVAL; } if (tailq_th <= 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tailq_th\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tailq_th", __func__); return -EINVAL; } diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c index 76dd8dd738..d90aa53bee 100644 --- a/lib/sched/rte_sched.c +++ b/lib/sched/rte_sched.c @@ -325,23 +325,23 @@ pipe_profile_check(struct rte_sched_pipe_params *params, /* Pipe parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* TB rate: non-zero, not greater than port rate */ if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb rate\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tb rate", __func__); return -EINVAL; } /* TB size: non-zero */ if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tb size\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tb size", __func__); return -EINVAL; } @@ -350,38 +350,38 @@ pipe_profile_check(struct rte_sched_pipe_params *params, if ((qsize[i] == 0 && params->tc_rate[i] != 0) || (qsize[i] != 0 && (params->tc_rate[i] == 0 || params->tc_rate[i] > params->tb_rate))) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize or tc_rate\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for qsize or tc_rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 || qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for be traffic class rate\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for be traffic class rate", __func__); return -EINVAL; } /* TC period: non-zero */ if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc period\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tc period", __func__); return -EINVAL; } /* Best effort tc oversubscription weight: non-zero */ if (params->tc_ov_weight == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect 
value for tc ov weight\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tc ov weight", __func__); return -EINVAL; } /* Queue WRR weights: non-zero */ for (i = 0; i < RTE_SCHED_BE_QUEUES_PER_PIPE; i++) { if (params->wrr_weights[i] == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for wrr weight\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for wrr weight", __func__); return -EINVAL; } } @@ -397,20 +397,20 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter params\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for parameter params", __func__); return -EINVAL; } if (params->tb_rate == 0 || params->tb_rate > rate) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb rate\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for tb rate", __func__); return -EINVAL; } if (params->tb_size == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tb size\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for tb size", __func__); return -EINVAL; } @@ -418,21 +418,21 @@ subport_profile_check(struct rte_sched_subport_profile_params *params, uint64_t tc_rate = params->tc_rate[i]; if (tc_rate == 0 || (tc_rate > params->tb_rate)) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc rate\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for tc rate", __func__); return -EINVAL; } } if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect tc rate(best effort)\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect tc rate(best effort)", __func__); return -EINVAL; } if (params->tc_period == 0) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for tc period\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for tc period", __func__); return -EINVAL; } @@ -445,29 +445,29 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) uint32_t i; if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } /* socket */ if (params->socket < 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for socket id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for socket id", __func__); return -EINVAL; } /* rate */ if (params->rate == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for rate\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for rate", __func__); return -EINVAL; } /* mtu */ if (params->mtu == 0) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for mtu\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for mtu", __func__); return -EINVAL; } @@ -475,8 +475,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) if (params->n_subports_per_port == 0 || params->n_subports_per_port > 1u << 16 || !rte_is_power_of_2(params->n_subports_per_port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for number of subports\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for number of subports", __func__); return -EINVAL; } @@ -484,8 +484,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) params->n_subport_profiles == 0 || params->n_max_subport_profiles == 0 || params->n_subport_profiles > params->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport profiles\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for subport profiles", __func__); return -EINVAL; } @@ 
-496,8 +496,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) status = subport_profile_check(p, params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", + SCHED_LOG(ERR, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -506,8 +506,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params) /* n_pipes_per_subport: non-zero, power of 2 */ if (params->n_pipes_per_subport == 0 || !rte_is_power_of_2(params->n_pipes_per_subport)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for maximum pipes number\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for maximum pipes number", __func__); return -EINVAL; } @@ -830,8 +830,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, /* Check user parameters */ if (params == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter params\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter params", __func__); return -EINVAL; } @@ -842,14 +842,14 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, uint16_t qsize = params->qsize[i]; if (qsize != 0 && !rte_is_power_of_2(qsize)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for qsize\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for qsize", __func__); return -EINVAL; } } if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) { - RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__); + SCHED_LOG(ERR, "%s: Incorrect qsize", __func__); return -EINVAL; } @@ -857,8 +857,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, if (params->n_pipes_per_subport_enabled == 0 || params->n_pipes_per_subport_enabled > n_max_pipes_per_subport || !rte_is_power_of_2(params->n_pipes_per_subport_enabled)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for pipes number\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for pipes number", __func__); return -EINVAL; } @@ -867,8 +867,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, params->n_pipe_profiles == 0 || params->n_max_pipe_profiles == 0 || params->n_pipe_profiles > params->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for pipe profiles\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for pipe profiles", __func__); return -EINVAL; } @@ -878,8 +878,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params, status = pipe_profile_check(p, rate, &params->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + SCHED_LOG(ERR, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } } @@ -896,8 +896,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, status = rte_sched_port_check_params(port_params); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler port params check failed (%d)\n", + SCHED_LOG(ERR, + "%s: Port scheduler port params check failed (%d)", __func__, status); return 0; @@ -910,8 +910,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params, port_params->n_pipes_per_subport, port_params->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Port scheduler subport params check failed (%d)\n", + SCHED_LOG(ERR, + "%s: Port scheduler subport params check failed (%d)", __func__, status); return 0; @@ -941,8 +941,8 @@ rte_sched_port_config(struct rte_sched_port_params *params) status = rte_sched_port_check_params(params); if (status != 0) { -
RTE_LOG(ERR, SCHED, - "%s: Port scheduler params check failed (%d)\n", + SCHED_LOG(ERR, + "%s: Port scheduler params check failed (%d)", __func__, status); return NULL; } @@ -956,7 +956,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port = rte_zmalloc_socket("qos_params", size0 + size1, RTE_CACHE_LINE_SIZE, params->socket); if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + SCHED_LOG(ERR, "%s: Memory allocation fails", __func__); return NULL; } @@ -965,7 +965,7 @@ rte_sched_port_config(struct rte_sched_port_params *params) port->subport_profiles = rte_zmalloc_socket("subport_profile", size2, RTE_CACHE_LINE_SIZE, params->socket); if (port->subport_profiles == NULL) { - RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__); + SCHED_LOG(ERR, "%s: Memory allocation fails", __func__); rte_free(port); return NULL; } @@ -1107,8 +1107,8 @@ rte_sched_red_config(struct rte_sched_port *port, params->cman_params->red_params[i][j].maxp_inv) != 0) { rte_sched_free_memory(port, n_subports); - RTE_LOG(NOTICE, SCHED, - "%s: RED configuration init fails\n", __func__); + SCHED_LOG(NOTICE, + "%s: RED configuration init fails", __func__); return -EINVAL; } } @@ -1127,8 +1127,8 @@ rte_sched_pie_config(struct rte_sched_port *port, for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) { if (params->cman_params->pie_params[i].tailq_th > params->qsize[i]) { - RTE_LOG(NOTICE, SCHED, - "%s: PIE tailq threshold incorrect\n", __func__); + SCHED_LOG(NOTICE, + "%s: PIE tailq threshold incorrect", __func__); return -EINVAL; } @@ -1139,8 +1139,8 @@ rte_sched_pie_config(struct rte_sched_port *port, params->cman_params->pie_params[i].tailq_th) != 0) { rte_sched_free_memory(port, n_subports); - RTE_LOG(NOTICE, SCHED, - "%s: PIE configuration init fails\n", __func__); + SCHED_LOG(NOTICE, + "%s: PIE configuration init fails", __func__); return -EINVAL; } } @@ -1171,14 +1171,14 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port, struct rte_sched_subport *s; if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter subport id", __func__); return -EINVAL; } @@ -1204,21 +1204,21 @@ rte_sched_subport_config(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return 0; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for subport id", __func__); ret = -EINVAL; goto out; } if (subport_profile_id >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profile exceeds the max limit\n", + SCHED_LOG(ERR, "%s: " + "Number of subport profile exceeds the max limit", __func__); ret = -EINVAL; goto out; @@ -1234,8 +1234,8 @@ rte_sched_subport_config(struct rte_sched_port *port, port->n_pipes_per_subport, port->rate); if (status != 0) { - RTE_LOG(NOTICE, SCHED, - "%s: Port scheduler params check failed (%d)\n", + SCHED_LOG(NOTICE, + "%s: Port scheduler params check failed (%d)", __func__, status); ret = -EINVAL; goto out; @@ -1250,8 +1250,8 @@ 
rte_sched_subport_config(struct rte_sched_port *port, s = rte_zmalloc_socket("subport_params", size0 + size1, RTE_CACHE_LINE_SIZE, port->socket); if (s == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Memory allocation fails\n", __func__); + SCHED_LOG(ERR, + "%s: Memory allocation fails", __func__); ret = -ENOMEM; goto out; } @@ -1282,8 +1282,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->cman_enabled = true; status = rte_sched_cman_config(port, s, params, n_subports); if (status) { - RTE_LOG(NOTICE, SCHED, - "%s: CMAN configuration fails\n", __func__); + SCHED_LOG(NOTICE, + "%s: CMAN configuration fails", __func__); return status; } } else { @@ -1330,8 +1330,8 @@ rte_sched_subport_config(struct rte_sched_port *port, s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array, bmp_mem_size); if (s->bmp == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Subport bitmap init error\n", __func__); + SCHED_LOG(ERR, + "%s: Subport bitmap init error", __func__); ret = -EINVAL; goto out; } @@ -1400,29 +1400,29 @@ rte_sched_pipe_config(struct rte_sched_port *port, deactivate = (pipe_profile < 0); if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter subport id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter subport id", __func__); ret = -EINVAL; goto out; } s = port->subports[subport_id]; if (pipe_id >= s->n_pipes_per_subport_enabled) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter pipe id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter pipe id", __func__); ret = -EINVAL; goto out; } if (!deactivate && profile >= s->n_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter pipe profile\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter pipe profile", __func__); ret = -EINVAL; goto out; } @@ -1447,8 +1447,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)\n", + SCHED_LOG(DEBUG, + "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } @@ -1489,8 +1489,8 @@ rte_sched_pipe_config(struct rte_sched_port *port, s->tc_ov = s->tc_ov_rate > subport_tc_be_rate; if (s->tc_ov != tc_be_ov) { - RTE_LOG(DEBUG, SCHED, - "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)\n", + SCHED_LOG(DEBUG, + "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)", subport_id, subport_tc_be_rate, s->tc_ov_rate); } p->tc_ov_period_id = s->tc_ov_period_id; @@ -1518,15 +1518,15 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } /* Subport id not exceeds the max limit */ if (subport_id > port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } @@ -1534,16 +1534,16 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profiles exceeds the max limit */ if (s->n_pipe_profiles >= 
s->n_max_pipe_profiles) { - RTE_LOG(ERR, SCHED, - "%s: Number of pipe profiles exceeds the max limit\n", __func__); + SCHED_LOG(ERR, + "%s: Number of pipe profiles exceeds the max limit", __func__); return -EINVAL; } /* Pipe params */ status = pipe_profile_check(params, port->rate, &s->qsize[0]); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile check failed(%d)\n", __func__, status); + SCHED_LOG(ERR, + "%s: Pipe profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1553,8 +1553,8 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port, /* Pipe profile should not exists */ for (i = 0; i < s->n_pipe_profiles; i++) if (memcmp(s->pipe_profiles + i, pp, sizeof(*pp)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: Pipe profile exists\n", __func__); + SCHED_LOG(ERR, + "%s: Pipe profile exists", __func__); return -EINVAL; } @@ -1581,20 +1581,20 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Port */ if (port == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for parameter port", __func__); return -EINVAL; } if (params == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter profile\n", __func__); + SCHED_LOG(ERR, "%s: " + "Incorrect value for parameter profile", __func__); return -EINVAL; } if (subport_profile_id == NULL) { - RTE_LOG(ERR, SCHED, "%s: " - "Incorrect value for parameter subport_profile_id\n", + SCHED_LOG(ERR, "%s: " + "Incorrect value for parameter subport_profile_id", __func__); return -EINVAL; } @@ -1603,16 +1603,16 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, /* Subport profiles exceeds the max limit */ if (port->n_subport_profiles >= port->n_max_subport_profiles) { - RTE_LOG(ERR, SCHED, "%s: " - "Number of subport profiles exceeds the max limit\n", + SCHED_LOG(ERR, "%s: " + "Number of subport profiles exceeds the max limit", __func__); return -EINVAL; } status = subport_profile_check(params, port->rate); if (status != 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile check failed(%d)\n", __func__, status); + SCHED_LOG(ERR, + "%s: subport profile check failed(%d)", __func__, status); return -EINVAL; } @@ -1622,8 +1622,8 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port, for (i = 0; i < port->n_subport_profiles; i++) if (memcmp(port->subport_profiles + i, dst, sizeof(*dst)) == 0) { - RTE_LOG(ERR, SCHED, - "%s: subport profile exists\n", __func__); + SCHED_LOG(ERR, + "%s: subport profile exists", __func__); return -EINVAL; } @@ -1695,26 +1695,26 @@ rte_sched_subport_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (subport_id >= port->n_subports_per_port) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for subport id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for subport id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (tc_ov == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for tc_ov\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for tc_ov", __func__); return -EINVAL; } @@ -1743,26 +1743,26 @@ rte_sched_queue_read_stats(struct rte_sched_port *port, /* Check user parameters */ if (port == 
NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter port\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter port", __func__); return -EINVAL; } if (queue_id >= rte_sched_port_queues_per_port(port)) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for queue id\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for queue id", __func__); return -EINVAL; } if (stats == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter stats\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter stats", __func__); return -EINVAL; } if (qlen == NULL) { - RTE_LOG(ERR, SCHED, - "%s: Incorrect value for parameter qlen\n", __func__); + SCHED_LOG(ERR, + "%s: Incorrect value for parameter qlen", __func__); return -EINVAL; } subport_qmask = port->n_pipes_per_subport_log2 + 4; diff --git a/lib/sched/rte_sched_log.h b/lib/sched/rte_sched_log.h index fde051f49d..d050b8fda1 100644 --- a/lib/sched/rte_sched_log.h +++ b/lib/sched/rte_sched_log.h @@ -2,3 +2,5 @@ extern int sched_logtype; #define RTE_LOGTYPE_SCHED sched_logtype +#define SCHED_LOG(level, fmt, ...) \ + RTE_LOG(level, SCHED, fmt "\n", ## __VA_ARGS__) diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c index 902cb78eac..83411d2b92 100644 --- a/lib/table/rte_table_acl.c +++ b/lib/table/rte_table_acl.c @@ -11,6 +11,8 @@ #include "rte_table_acl.h" +#include "table_log.h" + #ifdef RTE_TABLE_STATS_COLLECT #define RTE_TABLE_ACL_STATS_PKTS_IN_ADD(table, val) \ @@ -65,21 +67,21 @@ rte_table_acl_create( /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__); + TABLE_LOG(ERR, "%s: Invalid value for params", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__); + TABLE_LOG(ERR, "%s: Invalid value for name", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n", + TABLE_LOG(ERR, "%s: Invalid value for n_rules", __func__); return NULL; } if ((p->n_rule_fields == 0) || (p->n_rule_fields > RTE_ACL_MAX_FIELDS)) { - RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n", + TABLE_LOG(ERR, "%s: Invalid value for n_rule_fields", __func__); return NULL; } @@ -98,8 +100,8 @@ rte_table_acl_create( acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (acl == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for ACL table\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for ACL table", __func__, total_size); return NULL; } @@ -140,7 +142,7 @@ rte_table_acl_free(void *table) /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -164,7 +166,7 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Create low level ACL table */ ctx = rte_acl_create(&acl->acl_params); if (ctx == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n", + TABLE_LOG(ERR, "%s: Cannot create low level ACL table", __func__); return -1; } @@ -176,8 +178,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) status = rte_acl_add_rules(ctx, acl->acl_rule_list[i], 1); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot add rule to low level ACL table\n", + TABLE_LOG(ERR, + "%s: Cannot add rule to low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -196,8 +198,8 @@ rte_table_acl_build(struct 
rte_table_acl *acl, struct rte_acl_ctx **acl_ctx) /* Build low level ACl table */ status = rte_acl_build(ctx, &acl->cfg); if (status != 0) { - RTE_LOG(ERR, TABLE, - "%s: Cannot build the low level ACL table\n", + TABLE_LOG(ERR, + "%s: Cannot build the low level ACL table", __func__); rte_acl_free(ctx); return -1; @@ -226,29 +228,29 @@ rte_table_acl_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + TABLE_LOG(ERR, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + TABLE_LOG(ERR, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + TABLE_LOG(ERR, "%s: Priority is too high", __func__); return -EINVAL; } @@ -291,7 +293,7 @@ rte_table_acl_entry_add( /* Return if max rules */ if (free_pos_valid == 0) { - RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n", + TABLE_LOG(ERR, "%s: Max number of rules reached", __func__); return -ENOSPC; } @@ -342,15 +344,15 @@ rte_table_acl_entry_delete( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + TABLE_LOG(ERR, "%s: key_found parameter is NULL", __func__); return -EINVAL; } @@ -424,28 +426,28 @@ rte_table_acl_entry_add_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: keys parameter is NULL", __func__); return -EINVAL; } if (entries == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: entries parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to add\n", __func__); + TABLE_LOG(ERR, "%s: 0 rules to add", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + TABLE_LOG(ERR, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entries_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entries_ptr parameter is NULL\n", + TABLE_LOG(ERR, "%s: entries_ptr parameter is NULL", __func__); return -EINVAL; } @@ -455,20 +457,20 @@ rte_table_acl_entry_add_bulk( struct rte_table_acl_rule_add_params *rule; if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + TABLE_LOG(ERR, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } if (entries[i] == NULL) { - 
RTE_LOG(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL\n", + TABLE_LOG(ERR, "%s: entries[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } rule = keys[i]; if (rule->priority > RTE_ACL_MAX_PRIORITY) { - RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__); + TABLE_LOG(ERR, "%s: Priority is too high", __func__); return -EINVAL; } } @@ -604,26 +606,26 @@ rte_table_acl_entry_delete_bulk( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (keys == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: 0 rules to delete\n", __func__); + TABLE_LOG(ERR, "%s: 0 rules to delete", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + TABLE_LOG(ERR, "%s: key_found parameter is NULL", __func__); return -EINVAL; } for (i = 0; i < n_keys; i++) { if (keys[i] == NULL) { - RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n", + TABLE_LOG(ERR, "%s: keys[%" PRIu32 "] parameter is NULL", __func__, i); return -EINVAL; } diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c index a45b29ed6a..80bc2a74f5 100644 --- a/lib/table/rte_table_array.c +++ b/lib/table/rte_table_array.c @@ -11,6 +11,8 @@ #include "rte_table_array.h" +#include "table_log.h" + #ifdef RTE_TABLE_STATS_COLLECT #define RTE_TABLE_ARRAY_STATS_PKTS_IN_ADD(table, val) \ @@ -61,8 +63,8 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size) total_size = total_cl_size * RTE_CACHE_LINE_SIZE; t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for array table\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for array table", __func__, total_size); return NULL; } @@ -83,7 +85,7 @@ rte_table_array_free(void *table) /* Check input parameters */ if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -107,24 +109,24 @@ rte_table_array_entry_add( /* Check input parameters */ if (table == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (key == NULL) { - RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: key parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: entry parameter is NULL", __func__); return -EINVAL; } if (key_found == NULL) { - RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n", + TABLE_LOG(ERR, "%s: key_found parameter is NULL", __func__); return -EINVAL; } if (entry_ptr == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n", + TABLE_LOG(ERR, "%s: entry_ptr parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c index 86c960c103..0f4900c4df 100644 --- a/lib/table/rte_table_hash_cuckoo.c +++ b/lib/table/rte_table_hash_cuckoo.c @@ -10,6 +10,8 @@ #include "rte_table_hash_cuckoo.h" +#include "table_log.h" + #ifdef RTE_TABLE_STATS_COLLECT #define RTE_TABLE_HASH_CUCKOO_STATS_PKTS_IN_ADD(table, val) \ 
@@ -47,27 +49,27 @@ static int check_params_create_hash_cuckoo(struct rte_table_hash_cuckoo_params *params) { if (params == NULL) { - RTE_LOG(ERR, TABLE, "NULL Input Parameters.\n"); + TABLE_LOG(ERR, "NULL Input Parameters."); return -EINVAL; } if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "Table name is NULL.\n"); + TABLE_LOG(ERR, "Table name is NULL."); return -EINVAL; } if (params->key_size == 0) { - RTE_LOG(ERR, TABLE, "Invalid key_size.\n"); + TABLE_LOG(ERR, "Invalid key_size."); return -EINVAL; } if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "Invalid n_keys.\n"); + TABLE_LOG(ERR, "Invalid n_keys."); return -EINVAL; } if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "f_hash is NULL.\n"); + TABLE_LOG(ERR, "f_hash is NULL."); return -EINVAL; } @@ -94,8 +96,8 @@ rte_table_hash_cuckoo_create(void *params, t = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for cuckoo hash table %s\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for cuckoo hash table %s", __func__, total_size, p->name); return NULL; } @@ -114,8 +116,8 @@ rte_table_hash_cuckoo_create(void *params, if (h_table == NULL) { h_table = rte_hash_create(&hash_cuckoo_params); if (h_table == NULL) { - RTE_LOG(ERR, TABLE, - "%s: failed to create cuckoo hash table %s\n", + TABLE_LOG(ERR, + "%s: failed to create cuckoo hash table %s", __func__, p->name); rte_free(t); return NULL; @@ -131,8 +133,8 @@ rte_table_hash_cuckoo_create(void *params, t->key_offset = p->key_offset; t->h_table = h_table; - RTE_LOG(INFO, TABLE, - "%s: Cuckoo hash table %s memory footprint is %u bytes\n", + TABLE_LOG(INFO, + "%s: Cuckoo hash table %s memory footprint is %u bytes", __func__, p->name, total_size); return t; } diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c index 9f0220ded2..2148d83509 100644 --- a/lib/table/rte_table_hash_ext.c +++ b/lib/table/rte_table_hash_ext.c @@ -11,6 +11,8 @@ #include "rte_table_hash.h" +#include "table_log.h" + #define KEYS_PER_BUCKET 4 struct bucket { @@ -128,33 +130,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + TABLE_LOG(ERR, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + TABLE_LOG(ERR, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + TABLE_LOG(ERR, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -211,8 +213,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) key_sz + key_stack_sz + bkt_ext_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); 
return NULL; } @@ -222,13 +224,13 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory " - "footprint is %" PRIu64 " bytes\n", + TABLE_LOG(INFO, "%s (%u-byte key): Hash table %s memory " + "footprint is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c index 584c3f2c98..7734aef0cf 100644 --- a/lib/table/rte_table_hash_key16.c +++ b/lib/table/rte_table_hash_key16.c @@ -11,6 +11,8 @@ #include "rte_table_hash.h" #include "rte_lru.h" +#include "table_log.h" + #define KEY_SIZE 16 #define KEYS_PER_BUCKET 4 @@ -107,32 +109,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + TABLE_LOG(ERR, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + TABLE_LOG(ERR, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + TABLE_LOG(ERR, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + TABLE_LOG(ERR, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -181,8 +183,8 @@ rte_table_hash_create_key16_lru(void *params, total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -192,13 +194,13 @@ rte_table_hash_create_key16_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -236,7 +238,7 @@ rte_table_hash_free_key16_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -391,8 +393,8 @@ rte_table_hash_create_key16_ext(void *params, total_size = sizeof(struct rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 
" bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -402,13 +404,13 @@ rte_table_hash_create_key16_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -446,7 +448,7 @@ rte_table_hash_free_key16_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c index 22b5ca9166..fcb4348e8b 100644 --- a/lib/table/rte_table_hash_key32.c +++ b/lib/table/rte_table_hash_key32.c @@ -11,6 +11,8 @@ #include "rte_table_hash.h" #include "rte_lru.h" +#include "table_log.h" + #define KEY_SIZE 32 #define KEYS_PER_BUCKET 4 @@ -111,32 +113,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + TABLE_LOG(ERR, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + TABLE_LOG(ERR, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + TABLE_LOG(ERR, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + TABLE_LOG(ERR, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -184,8 +186,8 @@ rte_table_hash_create_key32_lru(void *params, KEYS_PER_BUCKET * entry_size); total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -195,14 +197,14 @@ rte_table_hash_create_key32_lru(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -244,7 +246,7 @@ rte_table_hash_free_key32_lru(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -400,8 +402,8 @@ rte_table_hash_create_key32_ext(void *params, total_size = sizeof(struct 
rte_table_hash) + (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -411,14 +413,14 @@ rte_table_hash_create_key32_ext(void *params, RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " - "is %" PRIu64" bytes\n", + "is %" PRIu64" bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -460,7 +462,7 @@ rte_table_hash_free_key32_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c index bd0ec4aac0..bbe65625c9 100644 --- a/lib/table/rte_table_hash_key8.c +++ b/lib/table/rte_table_hash_key8.c @@ -11,6 +11,8 @@ #include "rte_table_hash.h" #include "rte_lru.h" +#include "table_log.h" + #define KEY_SIZE 8 #define KEYS_PER_BUCKET 4 @@ -101,32 +103,32 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + TABLE_LOG(ERR, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if (params->key_size != KEY_SIZE) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + TABLE_LOG(ERR, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__); + TABLE_LOG(ERR, "%s: n_keys is zero", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n", + TABLE_LOG(ERR, "%s: f_hash function pointer is NULL", __func__); return -EINVAL; } @@ -173,8 +175,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } @@ -184,14 +186,14 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes" - " for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes" + " for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -226,7 +228,7 @@ rte_table_hash_free_key8_lru(void *table) /* Check 
input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -377,8 +379,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) (p->n_buckets + n_buckets_ext) * bucket_size + stack_size; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " + "for hash table %s", __func__, total_size, p->name); return NULL; } @@ -388,14 +390,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (f == NULL) { - RTE_LOG(ERR, TABLE, + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes " - "for hash table %s\n", + "for hash table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint " - "is %" PRIu64 " bytes\n", + TABLE_LOG(INFO, "%s: Hash table %s memory footprint " + "is %" PRIu64 " bytes", __func__, p->name, total_size); /* Memory initialization */ @@ -430,7 +432,7 @@ rte_table_hash_free_key8_ext(void *table) /* Check input parameters */ if (f == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c index 758ec4fe7a..cb4f32991e 100644 --- a/lib/table/rte_table_hash_lru.c +++ b/lib/table/rte_table_hash_lru.c @@ -12,6 +12,8 @@ #include "rte_table_hash.h" #include "rte_lru.h" +#include "table_log.h" + #define KEYS_PER_BUCKET 4 #ifdef RTE_TABLE_STATS_COLLECT @@ -105,33 +107,33 @@ check_params_create(struct rte_table_hash_params *params) { /* name */ if (params->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__); + TABLE_LOG(ERR, "%s: name invalid value", __func__); return -EINVAL; } /* key_size */ if ((params->key_size < sizeof(uint64_t)) || (!rte_is_power_of_2(params->key_size))) { - RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__); + TABLE_LOG(ERR, "%s: key_size invalid value", __func__); return -EINVAL; } /* n_keys */ if (params->n_keys == 0) { - RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_keys invalid value", __func__); return -EINVAL; } /* n_buckets */ if ((params->n_buckets == 0) || (!rte_is_power_of_2(params->n_buckets))) { - RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__); + TABLE_LOG(ERR, "%s: n_buckets invalid value", __func__); return -EINVAL; } /* f_hash */ if (params->f_hash == NULL) { - RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__); + TABLE_LOG(ERR, "%s: f_hash invalid value", __func__); return -EINVAL; } @@ -187,9 +189,9 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) key_stack_sz + data_sz; if (total_size > SIZE_MAX) { - RTE_LOG(ERR, TABLE, + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } @@ -199,14 +201,14 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size) RTE_CACHE_LINE_SIZE, socket_id); if (t == NULL) { - RTE_LOG(ERR, TABLE, + TABLE_LOG(ERR, "%s: Cannot allocate %" PRIu64 " bytes for hash " - "table %s\n", + "table %s", __func__, total_size, p->name); return NULL; } - RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint" - " is %" PRIu64 " bytes\n", + 
TABLE_LOG(INFO, "%s (%u-byte key): Hash table %s memory footprint" + " is %" PRIu64 " bytes", __func__, p->key_size, p->name, total_size); /* Memory initialization */ diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c index c2ef0d9ba0..b9cff251f6 100644 --- a/lib/table/rte_table_lpm.c +++ b/lib/table/rte_table_lpm.c @@ -13,6 +13,8 @@ #include "rte_table_lpm.h" +#include "table_log.h" + #ifndef RTE_TABLE_LPM_MAX_NEXT_HOPS #define RTE_TABLE_LPM_MAX_NEXT_HOPS 65536 #endif @@ -59,29 +61,29 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + TABLE_LOG(ERR, "%s: NULL input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + TABLE_LOG(ERR, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__); + TABLE_LOG(ERR, "%s: Invalid number_tbl8s", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + TABLE_LOG(ERR, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + TABLE_LOG(ERR, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + TABLE_LOG(ERR, "%s: Table name is NULL", __func__); return NULL; } @@ -93,8 +95,8 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM table\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for LPM table", __func__, total_size); return NULL; } @@ -107,7 +109,7 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size) if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n"); + TABLE_LOG(ERR, "Unable to create low-level LPM table"); return NULL; } @@ -127,7 +129,7 @@ rte_table_lpm_free(void *table) /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -187,21 +189,21 @@ rte_table_lpm_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + TABLE_LOG(ERR, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", + TABLE_LOG(ERR, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -216,7 +218,7 @@ rte_table_lpm_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + TABLE_LOG(ERR, "%s: NHT full", __func__); return -1; } @@ -226,7 +228,7 @@ rte_table_lpm_entry_add( /* Add rule to low level LPM table */ if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, 
nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__); + TABLE_LOG(ERR, "%s: LPM rule add failed", __func__); return -1; } @@ -253,16 +255,16 @@ rte_table_lpm_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + TABLE_LOG(ERR, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + TABLE_LOG(ERR, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -271,7 +273,7 @@ rte_table_lpm_entry_delete( status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__); + TABLE_LOG(ERR, "%s: LPM algorithmic error", __func__); return -1; } if (status == 0) { @@ -282,7 +284,7 @@ rte_table_lpm_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__); + TABLE_LOG(ERR, "%s: LPM rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c index 6f3e11a14f..e4e823a732 100644 --- a/lib/table/rte_table_lpm_ipv6.c +++ b/lib/table/rte_table_lpm_ipv6.c @@ -12,6 +12,8 @@ #include "rte_table_lpm_ipv6.h" +#include "table_log.h" + #define RTE_TABLE_LPM_MAX_NEXT_HOPS 256 #ifdef RTE_TABLE_STATS_COLLECT @@ -56,29 +58,29 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) /* Check input parameters */ if (p == NULL) { - RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__); + TABLE_LOG(ERR, "%s: NULL input parameters", __func__); return NULL; } if (p->n_rules == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + TABLE_LOG(ERR, "%s: Invalid n_rules", __func__); return NULL; } if (p->number_tbl8s == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__); + TABLE_LOG(ERR, "%s: Invalid n_rules", __func__); return NULL; } if (p->entry_unique_size == 0) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + TABLE_LOG(ERR, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->entry_unique_size > entry_size) { - RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n", + TABLE_LOG(ERR, "%s: Invalid entry_unique_size", __func__); return NULL; } if (p->name == NULL) { - RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n", + TABLE_LOG(ERR, "%s: Table name is NULL", __func__); return NULL; } @@ -90,8 +92,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id); if (lpm == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for LPM IPv6 table\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for LPM IPv6 table", __func__, total_size); return NULL; } @@ -103,8 +105,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size) lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config); if (lpm->lpm == NULL) { rte_free(lpm); - RTE_LOG(ERR, TABLE, - "Unable to create low-level LPM IPv6 table\n"); + TABLE_LOG(ERR, + "Unable to create low-level LPM IPv6 table"); return NULL; } @@ -124,7 +126,7 @@ rte_table_lpm_ipv6_free(void *table) 
/* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } @@ -184,21 +186,21 @@ rte_table_lpm_ipv6_entry_add( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + TABLE_LOG(ERR, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if (entry == NULL) { - RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: entry parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + TABLE_LOG(ERR, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -213,7 +215,7 @@ rte_table_lpm_ipv6_entry_add( uint8_t *nht_entry; if (nht_find_free(lpm, &nht_pos) == 0) { - RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__); + TABLE_LOG(ERR, "%s: NHT full", __func__); return -1; } @@ -224,7 +226,7 @@ rte_table_lpm_ipv6_entry_add( /* Add rule to low level LPM table */ if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__); + TABLE_LOG(ERR, "%s: LPM IPv6 rule add failed", __func__); return -1; } @@ -252,16 +254,16 @@ rte_table_lpm_ipv6_entry_delete( /* Check input parameters */ if (lpm == NULL) { - RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__); + TABLE_LOG(ERR, "%s: table parameter is NULL", __func__); return -EINVAL; } if (ip_prefix == NULL) { - RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n", + TABLE_LOG(ERR, "%s: ip_prefix parameter is NULL", __func__); return -EINVAL; } if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) { - RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__, + TABLE_LOG(ERR, "%s: invalid depth (%d)", __func__, ip_prefix->depth); return -EINVAL; } @@ -270,7 +272,7 @@ rte_table_lpm_ipv6_entry_delete( status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip, ip_prefix->depth, &nht_pos); if (status < 0) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n", + TABLE_LOG(ERR, "%s: LPM IPv6 algorithmic error", __func__); return -1; } @@ -282,7 +284,7 @@ rte_table_lpm_ipv6_entry_delete( /* Delete rule from the low-level LPM table */ status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth); if (status) { - RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n", + TABLE_LOG(ERR, "%s: LPM IPv6 rule delete failed", __func__); return -1; } diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c index cc21516995..7147f7146e 100644 --- a/lib/table/rte_table_stub.c +++ b/lib/table/rte_table_stub.c @@ -8,6 +8,8 @@ #include "rte_table_stub.h" +#include "table_log.h" + #ifdef RTE_TABLE_STATS_COLLECT #define RTE_TABLE_LPM_STATS_PKTS_IN_ADD(table, val) \ @@ -38,8 +40,8 @@ rte_table_stub_create(__rte_unused void *params, stub = rte_zmalloc_socket("TABLE", size, RTE_CACHE_LINE_SIZE, socket_id); if (stub == NULL) { - RTE_LOG(ERR, TABLE, - "%s: Cannot allocate %u bytes for stub table\n", + TABLE_LOG(ERR, + "%s: Cannot allocate %u bytes for stub table", __func__, size); return NULL; } diff --git a/lib/table/table_log.h b/lib/table/table_log.h new file mode 100644 index 0000000000..b50b20e595 --- /dev/null +++ b/lib/table/table_log.h @@ -0,0 +1,9 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +#define TABLE_LOG(level, fmt, ...) \ + RTE_LOG(level, TABLE, fmt "\n", ## __VA_ARGS__) + diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c index 83586c5b4f..01cb77257e 100644 --- a/lib/vhost/fd_man.c +++ b/lib/vhost/fd_man.c @@ -12,6 +12,8 @@ RTE_LOG_REGISTER_SUFFIX(vhost_fdset_logtype, fdset, INFO); #define RTE_LOGTYPE_VHOST_FDMAN vhost_fdset_logtype +#define VHOST_FDMAN_LOG(level, fmt, ...) \ + RTE_LOG(level, VHOST_FDMAN, fmt "\n", ## __VA_ARGS__) #define FDPOLLERR (POLLERR | POLLHUP | POLLNVAL) @@ -334,8 +336,8 @@ fdset_pipe_init(struct fdset *fdset) int ret; if (pipe(fdset->u.pipefd) < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to create pipe for vhost fdset\n"); + VHOST_FDMAN_LOG(ERR, + "failed to create pipe for vhost fdset"); return -1; } @@ -343,8 +345,8 @@ fdset_pipe_init(struct fdset *fdset) fdset_pipe_read_cb, NULL, NULL); if (ret < 0) { - RTE_LOG(ERR, VHOST_FDMAN, - "failed to add pipe readfd %d into vhost server fdset\n", + VHOST_FDMAN_LOG(ERR, + "failed to add pipe readfd %d into vhost server fdset", fdset->u.readfd); fdset_pipe_uninit(fdset); -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
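
[Editor's note] The table and vhost fdman hunks above all follow the same pattern: a thin per-library wrapper appends the newline once inside the macro, so call sites drop their trailing "\n" and a duplicated newline can no longer slip in. Below is a minimal sketch of that pattern with a hypothetical EXAMPLE logtype mapped onto the USER1 slot; it is not part of the series, only an illustration of the shape of TABLE_LOG/VHOST_FDMAN_LOG.

#include <errno.h>
#include <rte_log.h>

/* hypothetical logtype for the sketch, reusing the USER1 slot */
#define RTE_LOGTYPE_EXAMPLE RTE_LOGTYPE_USER1

/* the wrapper owns the newline, exactly like TABLE_LOG above */
#define EXAMPLE_LOG(level, fmt, ...) \
	RTE_LOG(level, EXAMPLE, fmt "\n", ## __VA_ARGS__)

int
example_free(void *table)
{
	if (table == NULL) {
		/* was: RTE_LOG(ERR, EXAMPLE, "%s: table parameter is NULL\n", __func__); */
		EXAMPLE_LOG(ERR, "%s: table parameter is NULL", __func__);
		return -EINVAL;
	}
	return 0;
}
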
* [PATCH v5 10/13] vhost: improve log for memory dumping configuration 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (8 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 09/13] lib: add more logging helpers David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 11/13] log: add a per line log helper David Marchand ` (4 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Maxime Coquelin, Chenbo Xia Add the device name as a prefix of logs associated to madvise() calls. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> --- lib/vhost/iotlb.c | 18 +++++++++--------- lib/vhost/vhost.h | 2 +- lib/vhost/vhost_user.c | 26 +++++++++++++------------- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 87ac0e5126..10ab77262e 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -54,16 +54,16 @@ vhost_user_iotlb_share_page(struct vhost_iotlb_entry *a, struct vhost_iotlb_entr } static void -vhost_user_iotlb_set_dump(struct vhost_iotlb_entry *node) +vhost_user_iotlb_set_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node) { uint64_t start; start = node->uaddr + node->uoffset; - mem_set_dump((void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); + mem_set_dump(dev, (void *)(uintptr_t)start, node->size, true, RTE_BIT64(node->page_shift)); } static void -vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, +vhost_user_iotlb_clear_dump(struct virtio_net *dev, struct vhost_iotlb_entry *node, struct vhost_iotlb_entry *prev, struct vhost_iotlb_entry *next) { uint64_t start, end; @@ -80,7 +80,7 @@ vhost_user_iotlb_clear_dump(struct vhost_iotlb_entry *node, end = RTE_ALIGN_FLOOR(end, RTE_BIT64(node->page_shift)); if (end > start) - mem_set_dump((void *)(uintptr_t)start, end - start, false, + mem_set_dump(dev, (void *)(uintptr_t)start, end - start, false, RTE_BIT64(node->page_shift)); } @@ -204,7 +204,7 @@ vhost_user_iotlb_cache_remove_all(struct virtio_net *dev) vhost_user_iotlb_wr_lock_all(dev); RTE_TAILQ_FOREACH_SAFE(node, &dev->iotlb_list, next, temp_node) { - vhost_user_iotlb_clear_dump(node, NULL, NULL); + vhost_user_iotlb_clear_dump(dev, node, NULL, NULL); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -230,7 +230,7 @@ vhost_user_iotlb_cache_random_evict(struct virtio_net *dev) if (!entry_idx) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); @@ -285,7 +285,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua vhost_user_iotlb_pool_put(dev, new_node); goto unlock; } else if (node->iova > new_node->iova) { - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_BEFORE(node, new_node, next); dev->iotlb_cache_nr++; @@ -293,7 +293,7 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua } } - vhost_user_iotlb_set_dump(new_node); + vhost_user_iotlb_set_dump(dev, new_node); TAILQ_INSERT_TAIL(&dev->iotlb_list, new_node, next); dev->iotlb_cache_nr++; @@ -322,7 +322,7 @@ 
vhost_user_iotlb_cache_remove(struct virtio_net *dev, uint64_t iova, uint64_t si if (iova < node->iova + node->size) { struct vhost_iotlb_entry *next_node = RTE_TAILQ_NEXT(node, next); - vhost_user_iotlb_clear_dump(node, prev_node, next_node); + vhost_user_iotlb_clear_dump(dev, node, prev_node, next_node); TAILQ_REMOVE(&dev->iotlb_list, node, next); vhost_user_iotlb_remove_notify(dev, node); diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index f8624fba3d..5f24911190 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -1062,6 +1062,6 @@ mbuf_is_consumed(struct rte_mbuf *m) return true; } -void mem_set_dump(void *ptr, size_t size, bool enable, uint64_t alignment); +void mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t alignment); #endif /* _VHOST_NET_CDEV_H_ */ diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index e36312181a..413f068bcd 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -763,7 +763,7 @@ hua_to_alignment(struct rte_vhost_memory *mem, void *ptr) } void -mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) +mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64_t pagesz) { #ifdef MADV_DONTDUMP void *start = RTE_PTR_ALIGN_FLOOR(ptr, pagesz); @@ -771,8 +771,8 @@ mem_set_dump(void *ptr, size_t size, bool enable, uint64_t pagesz) size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - rte_log(RTE_LOG_INFO, vhost_config_log_level, - "VHOST_CONFIG: could not set coredump preference (%s).\n", strerror(errno)); + VHOST_LOG_CONFIG(dev->ifname, INFO, + "could not set coredump preference (%s).\n", strerror(errno)); } #endif } @@ -807,7 +807,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc_packed, len, true, + mem_set_dump(dev, vq->desc_packed, len, true, hua_to_alignment(dev->mem, vq->desc_packed)); numa_realloc(&dev, &vq); *pdev = dev; @@ -824,7 +824,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->driver_event, len, true, + mem_set_dump(dev, vq->driver_event, len, true, hua_to_alignment(dev->mem, vq->driver_event)); len = sizeof(struct vring_packed_desc_event); vq->device_event = (struct vring_packed_desc_event *) @@ -837,7 +837,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->device_event, len, true, + mem_set_dump(dev, vq->device_event, len, true, hua_to_alignment(dev->mem, vq->device_event)); vq->access_ok = true; return; @@ -855,7 +855,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); + mem_set_dump(dev, vq->desc, len, true, hua_to_alignment(dev->mem, vq->desc)); numa_realloc(&dev, &vq); *pdev = dev; *pvq = vq; @@ -871,7 +871,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); + mem_set_dump(dev, vq->avail, len, true, hua_to_alignment(dev->mem, vq->avail)); len = sizeof(struct vring_used) + sizeof(struct vring_used_elem) * vq->size; if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) @@ -884,7 +884,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) return; } - mem_set_dump(vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); + 
mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { VHOST_LOG_CONFIG(dev->ifname, WARNING, @@ -1274,7 +1274,7 @@ vhost_user_mmap_region(struct virtio_net *dev, region->mmap_addr = mmap_addr; region->mmap_size = mmap_size; region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset; - mem_set_dump(mmap_addr, mmap_size, false, alignment); + mem_set_dump(dev, mmap_addr, mmap_size, false, alignment); if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { @@ -1580,7 +1580,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } alignment = get_blk_size(mfd); - mem_set_dump(ptr, size, false, alignment); + mem_set_dump(dev, ptr, size, false, alignment); *fd = mfd; return ptr; } @@ -1789,7 +1789,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info->fd = -1; } - mem_set_dump(addr, mmap_size, false, get_blk_size(fd)); + mem_set_dump(dev, addr, mmap_size, false, get_blk_size(fd)); dev->inflight_info->fd = fd; dev->inflight_info->addr = addr; dev->inflight_info->size = mmap_size; @@ -2343,7 +2343,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, dev->log_addr = (uint64_t)(uintptr_t)addr; dev->log_base = dev->log_addr + off; dev->log_size = size; - mem_set_dump(addr, size + off, false, alignment); + mem_set_dump(dev, addr, size + off, false, alignment); for (i = 0; i < dev->nr_vring; i++) { struct vhost_virtqueue *vq = dev->virtqueue[i]; -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
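
[Editor's note] The point of this patch is simply that the helper gains a device argument so the madvise() error path can name the interface. A reduced, self-contained sketch of that shape follows; the names are hypothetical and plain printf stands in for VHOST_LOG_CONFIG.

#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

struct example_dev {		/* stand-in for struct virtio_net */
	char ifname[64];
};

/* stand-in for VHOST_LOG_CONFIG(dev->ifname, INFO, ...) */
#define EXAMPLE_LOG(prefix, fmt, ...) \
	printf("VHOST_CONFIG: (%s) " fmt "\n", prefix, ##__VA_ARGS__)

static void
example_set_dump(struct example_dev *dev, void *ptr, size_t len, bool enable)
{
#ifdef MADV_DONTDUMP
	/* same guard as the original: only log when the advice cannot be set */
	if (madvise(ptr, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1)
		EXAMPLE_LOG(dev->ifname,
			"could not set coredump preference (%s).",
			strerror(errno));
#else
	(void)dev; (void)ptr; (void)len; (void)enable;
#endif
}
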
* [PATCH v5 11/13] log: add a per line log helper 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (9 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 10/13] vhost: improve log for memory dumping configuration David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:42 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 12/13] lib: replace logging helpers David Marchand ` (3 subsequent siblings) 14 siblings, 1 reply; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng gcc builtin __builtin_strchr can be used as a static assertion to check whether passed format strings contain a \n. This can be useful to detect double \n in log messages. Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Chengwen Feng <fengchengwen@huawei.com> --- Changes since v4: - fixed build with -pedantic, Changes since v3: - fixed some checkpatch complaints, Changes since RFC v1: - added a check in checkpatches.sh, --- devtools/checkpatches.sh | 8 ++++++++ lib/log/rte_log.h | 21 +++++++++++++++++++++ 2 files changed, 29 insertions(+) diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh index 10b79ca2bc..10d1bf490b 100755 --- a/devtools/checkpatches.sh +++ b/devtools/checkpatches.sh @@ -53,6 +53,14 @@ print_usage () { check_forbidden_additions() { # <patch> res=0 + # refrain from new calls to RTE_LOG + awk -v FOLDERS="lib" \ + -v EXPRESSIONS="RTE_LOG\\\(" \ + -v RET_ON_FAIL=1 \ + -v MESSAGE='Prefer RTE_LOG_LINE' \ + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \ + "$1" || res=1 + # refrain from new additions of rte_panic() and rte_exit() # multiple folders and expressions are separated by spaces awk -v FOLDERS="lib drivers" \ diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h index 3394746103..5ba198ba24 100644 --- a/lib/log/rte_log.h +++ b/lib/log/rte_log.h @@ -17,6 +17,7 @@ extern "C" { #endif +#include <assert.h> #include <stdint.h> #include <stdio.h> #include <stdarg.h> @@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap) RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \ 0) +#if defined(RTE_TOOLCHAIN_GCC) && !defined(PEDANTIC) +#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \ + static_assert(!__builtin_strchr(fmt, '\n'), \ + "This log format string contains a \\n") +#else +#define RTE_LOG_CHECK_NO_NEWLINE(...) +#endif + +#define RTE_LOG_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ + RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + +#define RTE_LOG_DP_LINE(l, t, ...) do { \ + RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \ + RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ +} while (0) + #define RTE_LOG_REGISTER_IMPL(type, name, level) \ int type; \ RTE_INIT(__##type) \ -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
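
[Editor's note] To see the build error this helper produces, here is a standalone sketch of the same check outside of rte_log.h (gcc only, simplified names; as in the patch it relies on the format being a string literal and is skipped under -pedantic).

#include <assert.h>
#include <stdio.h>

#define LOG_LINE(fmt, ...) do {                                   \
	static_assert(!__builtin_strchr(fmt, '\n'),               \
		"This log format string contains a \\n");         \
	printf(fmt "\n", ##__VA_ARGS__);                           \
} while (0)

int main(void)
{
	LOG_LINE("hello %s", "world");	/* fine, one newline appended */
#if 0	/* enabling this call breaks the build with gcc:
	 * error: static assertion failed:
	 * "This log format string contains a \n" */
	LOG_LINE("hello %s\n", "world");
#endif
	return 0;
}
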
* Re: [PATCH v5 11/13] log: add a per line log helper 2023-12-20 15:36 ` [PATCH v5 11/13] log: add a per line log helper David Marchand @ 2023-12-20 15:42 ` David Marchand 0 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:42 UTC (permalink / raw) To: dev; +Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng On Wed, Dec 20, 2023 at 4:39 PM David Marchand <david.marchand@redhat.com> wrote: > > gcc builtin __builtin_strchr can be used as a static assertion to check > whether passed format strings contain a \n. > This can be useful to detect double \n in log messages. > > Signed-off-by: David Marchand <david.marchand@redhat.com> > Acked-by: Stephen Hemminger <stephen@networkplumber.org> > Acked-by: Chengwen Feng <fengchengwen@huawei.com> > --- > Changes since v4: > - fixed build with -pedantic, Unfortunately, upon testing, clang does not support constant expression folding in (Ubuntu 20.04 and Fedora 37 at least) older versions. So we may manage to make this check work with clang, but that's for the future. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
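
[Editor's note] A quick way to check whether a given toolchain folds the builtin as required is a two-line probe like the following (a hypothetical probe.c, not part of the series); gcc accepts it, while the older clang versions mentioned above reject the assertion as not being a constant expression.

/* probe.c: does this compiler fold __builtin_strchr() in static_assert? */
#include <assert.h>

static_assert(!__builtin_strchr("no newline here", '\n'),
	      "unexpected newline in literal");

int main(void) { return 0; }

Compiling it with "cc -std=c11 -c probe.c" either succeeds or reports a non-constant expression, which tells you whether RTE_LOG_CHECK_NO_NEWLINE could be enabled for that compiler.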
* [PATCH v5 12/13] lib: replace logging helpers 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (10 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 11/13] log: add a per line log helper David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-20 15:36 ` [PATCH v5 13/13] lib: use per line logging in helpers David Marchand ` (2 subsequent siblings) 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Andrew Rybchenko, Konstantin Ananyev, Ruifeng Wang, Ori Kam, Yipeng Wang, Sameh Gobriel, Reshma Pattan, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Ciara Power, Maxime Coquelin, Chenbo Xia This is a preparation step before the next change. Many libraries have their own logging helpers that do not add a newline in their format string. Some previous changes fixed places where some of those helpers are called without a trailing newline. Using RTE_LOG_LINE in the existing helpers will ensure we don't introduce new issues in the future. The problem is that if we simply convert to the RTE_LOG_LINE helper, a future fix may introduce a regression since the logging helper change won't be backported. To address this concern, rename existing helpers: backporting a call to them will trigger some conflict or build issue in LTS branches. Note: - bpf and vhost that still has some debug multilines messages, a direct call to RTE_LOG/RTE_LOG_DP is used: this will make it easier to notice such special cases, - about previously publicly exposed logging helpers, when such helper is not publicly used (iow in public inline API), it is removed from the public API (this is the case for the member library), Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since RFC v2: - kept a RTE_ prefix for bpf log macro to avoid potential collision with external code, --- lib/bpf/bpf.c | 2 +- lib/bpf/bpf_convert.c | 16 +- lib/bpf/bpf_exec.c | 12 +- lib/bpf/bpf_impl.h | 5 +- lib/bpf/bpf_jit_arm64.c | 8 +- lib/bpf/bpf_jit_x86.c | 4 +- lib/bpf/bpf_load.c | 2 +- lib/bpf/bpf_load_elf.c | 24 +- lib/bpf/bpf_pkt.c | 4 +- lib/bpf/bpf_stub.c | 4 +- lib/bpf/bpf_validate.c | 38 +- lib/ethdev/ethdev_driver.c | 44 +- lib/ethdev/ethdev_pci.h | 2 +- lib/ethdev/ethdev_private.c | 10 +- lib/ethdev/rte_class_eth.c | 2 +- lib/ethdev/rte_ethdev.c | 878 +++++++++++++-------------- lib/ethdev/rte_ethdev.h | 52 +- lib/ethdev/rte_ethdev_cman.c | 16 +- lib/ethdev/rte_ethdev_telemetry.c | 44 +- lib/ethdev/rte_flow.c | 64 +- lib/ethdev/rte_flow.h | 3 - lib/ethdev/sff_telemetry.c | 30 +- lib/member/member.h | 14 + lib/member/rte_member.c | 15 +- lib/member/rte_member.h | 9 - lib/member/rte_member_heap.h | 39 +- lib/member/rte_member_ht.c | 13 +- lib/member/rte_member_sketch.c | 41 +- lib/member/rte_member_vbf.c | 9 +- lib/pdump/rte_pdump.c | 112 ++-- lib/power/power_acpi_cpufreq.c | 10 +- lib/power/power_amd_pstate_cpufreq.c | 12 +- lib/power/power_common.c | 4 +- lib/power/power_common.h | 6 +- lib/power/power_cppc_cpufreq.c | 12 +- lib/power/power_intel_uncore.c | 4 +- lib/power/power_pstate_cpufreq.c | 12 +- lib/regexdev/rte_regexdev.c | 86 +-- lib/regexdev/rte_regexdev.h | 14 +- lib/telemetry/telemetry.c | 41 +- lib/vhost/iotlb.c | 18 +- lib/vhost/socket.c | 102 ++-- lib/vhost/vdpa.c | 8 +- lib/vhost/vduse.c | 120 ++-- lib/vhost/vduse.h | 4 +- lib/vhost/vhost.c | 118 ++-- lib/vhost/vhost.h | 22 +- 
lib/vhost/vhost_user.c | 508 ++++++++-------- lib/vhost/virtio_net.c | 188 +++--- lib/vhost/virtio_net_ctrl.c | 38 +- 50 files changed, 1429 insertions(+), 1414 deletions(-) create mode 100644 lib/member/member.h diff --git a/lib/bpf/bpf.c b/lib/bpf/bpf.c index 8a0254d8bb..bbe75c8bfe 100644 --- a/lib/bpf/bpf.c +++ b/lib/bpf/bpf.c @@ -44,7 +44,7 @@ __rte_bpf_jit(struct rte_bpf *bpf) #endif if (rc != 0) - RTE_BPF_LOG(WARNING, "%s(%p) failed, error code: %d;\n", + RTE_BPF_LOG_LINE(WARNING, "%s(%p) failed, error code: %d;", __func__, bpf, rc); return rc; } diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c index d441be6663..d7ff2b4325 100644 --- a/lib/bpf/bpf_convert.c +++ b/lib/bpf/bpf_convert.c @@ -226,8 +226,8 @@ static bool convert_bpf_load(const struct bpf_insn *fp, case SKF_AD_OFF + SKF_AD_RANDOM: case SKF_AD_OFF + SKF_AD_ALU_XOR_X: /* Linux has special negative offsets to access meta-data. */ - RTE_BPF_LOG(ERR, - "rte_bpf_convert: socket offset %d not supported\n", + RTE_BPF_LOG_LINE(ERR, + "rte_bpf_convert: socket offset %d not supported", fp->k - SKF_AD_OFF); return true; default: @@ -246,7 +246,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, uint8_t bpf_src; if (len > BPF_MAXINSNS) { - RTE_BPF_LOG(ERR, "%s: cBPF program too long (%zu insns)\n", + RTE_BPF_LOG_LINE(ERR, "%s: cBPF program too long (%zu insns)", __func__, len); return -EINVAL; } @@ -482,7 +482,7 @@ static int bpf_convert_filter(const struct bpf_insn *prog, size_t len, /* Unknown instruction. */ default: - RTE_BPF_LOG(ERR, "%s: Unknown instruction!: %#x\n", + RTE_BPF_LOG_LINE(ERR, "%s: Unknown instruction!: %#x", __func__, fp->code); goto err; } @@ -526,7 +526,7 @@ rte_bpf_convert(const struct bpf_program *prog) int ret; if (prog == NULL) { - RTE_BPF_LOG(ERR, "%s: NULL program\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: NULL program", __func__); rte_errno = EINVAL; return NULL; } @@ -534,12 +534,12 @@ rte_bpf_convert(const struct bpf_program *prog) /* 1st pass: calculate the eBPF program length */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, NULL, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot get eBPF length\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot get eBPF length", __func__); rte_errno = -ret; return NULL; } - RTE_BPF_LOG(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u\n", + RTE_BPF_LOG_LINE(DEBUG, "%s: prog len cBPF=%u -> eBPF=%u", __func__, prog->bf_len, ebpf_len); prm = rte_zmalloc("bpf_filter", @@ -555,7 +555,7 @@ rte_bpf_convert(const struct bpf_program *prog) /* 2nd pass: remap cBPF to eBPF instructions */ ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len); if (ret < 0) { - RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__); free(prm); rte_errno = -ret; return NULL; diff --git a/lib/bpf/bpf_exec.c b/lib/bpf/bpf_exec.c index 09f4a9a571..5d597ec170 100644 --- a/lib/bpf/bpf_exec.c +++ b/lib/bpf/bpf_exec.c @@ -43,8 +43,8 @@ #define BPF_DIV_ZERO_CHECK(bpf, reg, ins, type) do { \ if ((type)(reg)[(ins)->src_reg] == 0) { \ - RTE_BPF_LOG(ERR, \ - "%s(%p): division by 0 at pc: %#zx;\n", \ + RTE_BPF_LOG_LINE(ERR, \ + "%s(%p): division by 0 at pc: %#zx;", \ __func__, bpf, \ (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); \ return 0; \ @@ -136,8 +136,8 @@ bpf_ld_mbuf(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM], mb = (const struct rte_mbuf *)(uintptr_t)reg[EBPF_REG_6]; p = rte_pktmbuf_read(mb, off, len, reg + EBPF_REG_0); if (p == NULL) - RTE_BPF_LOG(DEBUG, 
"%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " - "load beyond packet boundary at pc: %#zx;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bpf=%p, mbuf=%p, ofs=%u, len=%u): " + "load beyond packet boundary at pc: %#zx;", __func__, bpf, mb, off, len, (uintptr_t)(ins) - (uintptr_t)(bpf)->prm.ins); return p; @@ -462,8 +462,8 @@ bpf_exec(const struct rte_bpf *bpf, uint64_t reg[EBPF_REG_NUM]) case (BPF_JMP | EBPF_EXIT): return reg[EBPF_REG_0]; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %#zx;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %#zx;", __func__, bpf, ins->code, (uintptr_t)ins - (uintptr_t)bpf->prm.ins); return 0; diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index b483569071..6a82ae4ef2 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -27,9 +27,10 @@ int __rte_bpf_jit_x86(struct rte_bpf *bpf); int __rte_bpf_jit_arm64(struct rte_bpf *bpf); extern int rte_bpf_logtype; +#define RTE_LOGTYPE_BPF rte_bpf_logtype -#define RTE_BPF_LOG(lvl, fmt, args...) \ - rte_log(RTE_LOG_## lvl, rte_bpf_logtype, fmt, ##args) +#define RTE_BPF_LOG_LINE(lvl, fmt, args...) \ + RTE_LOG(lvl, BPF, fmt "\n", ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/bpf/bpf_jit_arm64.c b/lib/bpf/bpf_jit_arm64.c index f9ddafd7dc..96b8cd2e03 100644 --- a/lib/bpf/bpf_jit_arm64.c +++ b/lib/bpf/bpf_jit_arm64.c @@ -98,8 +98,8 @@ check_invalid_args(struct a64_jit_ctx *ctx, uint32_t limit) for (idx = 0; idx < limit; idx++) { if (rte_le_to_cpu_32(ctx->ins[idx]) == A64_INVALID_OP_CODE) { - RTE_BPF_LOG(ERR, - "%s: invalid opcode at %u;\n", __func__, idx); + RTE_BPF_LOG_LINE(ERR, + "%s: invalid opcode at %u;", __func__, idx); return -EINVAL; } } @@ -1378,8 +1378,8 @@ emit(struct a64_jit_ctx *ctx, struct rte_bpf *bpf) emit_epilogue(ctx); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_jit_x86.c b/lib/bpf/bpf_jit_x86.c index a73b2006db..4d74e418f8 100644 --- a/lib/bpf/bpf_jit_x86.c +++ b/lib/bpf/bpf_jit_x86.c @@ -1476,8 +1476,8 @@ emit(struct bpf_jit_state *st, const struct rte_bpf *bpf) emit_epilog(st); break; default: - RTE_BPF_LOG(ERR, - "%s(%p): invalid opcode %#x at pc: %u;\n", + RTE_BPF_LOG_LINE(ERR, + "%s(%p): invalid opcode %#x at pc: %u;", __func__, bpf, ins->code, i); return -EINVAL; } diff --git a/lib/bpf/bpf_load.c b/lib/bpf/bpf_load.c index 45ce9210da..de43347405 100644 --- a/lib/bpf/bpf_load.c +++ b/lib/bpf/bpf_load.c @@ -98,7 +98,7 @@ rte_bpf_load(const struct rte_bpf_prm *prm) if (rc != 0) { rte_errno = -rc; - RTE_BPF_LOG(ERR, "%s: %d-th xsym is invalid\n", __func__, i); + RTE_BPF_LOG_LINE(ERR, "%s: %d-th xsym is invalid", __func__, i); return NULL; } diff --git a/lib/bpf/bpf_load_elf.c b/lib/bpf/bpf_load_elf.c index 02a5d8ba0d..e0abd3c856 100644 --- a/lib/bpf/bpf_load_elf.c +++ b/lib/bpf/bpf_load_elf.c @@ -84,8 +84,8 @@ resolve_xsym(const char *sn, size_t ofs, struct ebpf_insn *ins, size_t ins_sz, * as an ordinary EBPF_CALL. 
*/ if (ins[idx].src_reg == EBPF_PSEUDO_CALL) { - RTE_BPF_LOG(INFO, "%s(%u): " - "EBPF_PSEUDO_CALL to external function: %s\n", + RTE_BPF_LOG_LINE(INFO, "%s(%u): " + "EBPF_PSEUDO_CALL to external function: %s", __func__, idx, sn); ins[idx].src_reg = EBPF_REG_0; } @@ -121,7 +121,7 @@ check_elf_header(const Elf64_Ehdr *eh) err = "unexpected machine type"; if (err != NULL) { - RTE_BPF_LOG(ERR, "%s(): %s\n", __func__, err); + RTE_BPF_LOG_LINE(ERR, "%s(): %s", __func__, err); return -EINVAL; } @@ -144,7 +144,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) eh = elf64_getehdr(elf); if (eh == NULL) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -167,7 +167,7 @@ find_elf_code(Elf *elf, const char *section, Elf_Data **psd, size_t *pidx) if (sd == NULL || sd->d_size == 0 || sd->d_size % sizeof(struct ebpf_insn) != 0) { rc = elf_errno(); - RTE_BPF_LOG(ERR, "%s(%p, %s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p, %s) error code: %d(%s)", __func__, elf, section, rc, elf_errmsg(rc)); return -EINVAL; } @@ -216,8 +216,8 @@ process_reloc(Elf *elf, size_t sym_idx, Elf64_Rel *re, size_t re_sz, rc = resolve_xsym(sn, ofs, ins, ins_sz, prm); if (rc != 0) { - RTE_BPF_LOG(ERR, - "resolve_xsym(%s, %zu) error code: %d\n", + RTE_BPF_LOG_LINE(ERR, + "resolve_xsym(%s, %zu) error code: %d", sn, ofs, rc); return rc; } @@ -309,7 +309,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, fd = open(fname, O_RDONLY); if (fd < 0) { rc = errno; - RTE_BPF_LOG(ERR, "%s(%s) error code: %d(%s)\n", + RTE_BPF_LOG_LINE(ERR, "%s(%s) error code: %d(%s)", __func__, fname, rc, strerror(rc)); rte_errno = EINVAL; return NULL; @@ -319,15 +319,15 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, close(fd); if (bpf == NULL) { - RTE_BPF_LOG(ERR, + RTE_BPF_LOG_LINE(ERR, "%s(fname=\"%s\", sname=\"%s\") failed, " - "error code: %d\n", + "error code: %d", __func__, fname, sname, rte_errno); return NULL; } - RTE_BPF_LOG(INFO, "%s(fname=\"%s\", sname=\"%s\") " - "successfully creates %p(jit={.func=%p,.sz=%zu});\n", + RTE_BPF_LOG_LINE(INFO, "%s(fname=\"%s\", sname=\"%s\") " + "successfully creates %p(jit={.func=%p,.sz=%zu});", __func__, fname, sname, bpf, bpf->jit.func, bpf->jit.sz); return bpf; } diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c index 7a8e4a6ef4..793a75ded9 100644 --- a/lib/bpf/bpf_pkt.c +++ b/lib/bpf/bpf_pkt.c @@ -512,7 +512,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, ftx = select_tx_callback(prm->prog_arg.type, flags); if (frx == NULL && ftx == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no callback selected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no callback selected;", __func__, port, queue); return -EINVAL; } @@ -524,7 +524,7 @@ bpf_eth_elf_load(struct bpf_eth_cbh *cbh, uint16_t port, uint16_t queue, rte_bpf_get_jit(bpf, &jit); if ((flags & RTE_BPF_ETH_F_JIT) != 0 && jit.func == NULL) { - RTE_BPF_LOG(ERR, "%s(%u, %u): no JIT generated;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%u, %u): no JIT generated;", __func__, port, queue); rte_bpf_destroy(bpf); return -ENOTSUP; diff --git a/lib/bpf/bpf_stub.c b/lib/bpf/bpf_stub.c index 83c2203622..1babb16bde 100644 --- a/lib/bpf/bpf_stub.c +++ b/lib/bpf/bpf_stub.c @@ -19,7 +19,7 @@ rte_bpf_elf_load(const struct rte_bpf_prm *prm, const char *fname, return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libelf installed\n", + 
RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libelf installed", __func__); rte_errno = ENOTSUP; return NULL; @@ -35,7 +35,7 @@ rte_bpf_convert(const struct bpf_program *prog) return NULL; } - RTE_BPF_LOG(ERR, "%s() is not supported, rebuild with libpcap installed\n", + RTE_BPF_LOG_LINE(ERR, "%s() is not supported, rebuild with libpcap installed", __func__); rte_errno = ENOTSUP; return NULL; diff --git a/lib/bpf/bpf_validate.c b/lib/bpf/bpf_validate.c index f246b3c5eb..79be5e917d 100644 --- a/lib/bpf/bpf_validate.c +++ b/lib/bpf/bpf_validate.c @@ -1812,15 +1812,15 @@ add_edge(struct bpf_verifier *bvf, struct inst_node *node, uint32_t nidx) uint32_t ne; if (nidx > bvf->prm->nb_ins) { - RTE_BPF_LOG(ERR, "%s: program boundary violation at pc: %u, " - "next pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: program boundary violation at pc: %u, " + "next pc: %u", __func__, get_node_idx(bvf, node), nidx); return -EINVAL; } ne = node->nb_edge; if (ne >= RTE_DIM(node->edge_dest)) { - RTE_BPF_LOG(ERR, "%s: internal error at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: internal error at pc: %u", __func__, get_node_idx(bvf, node)); return -EINVAL; } @@ -1927,7 +1927,7 @@ log_unreachable(const struct bpf_verifier *bvf) if (node->colour == WHITE && ins->code != (BPF_LD | BPF_IMM | EBPF_DW)) - RTE_BPF_LOG(ERR, "unreachable code at pc: %u;\n", i); + RTE_BPF_LOG_LINE(ERR, "unreachable code at pc: %u;", i); } } @@ -1948,8 +1948,8 @@ log_loop(const struct bpf_verifier *bvf) for (j = 0; j != node->nb_edge; j++) { if (node->edge_type[j] == BACK_EDGE) - RTE_BPF_LOG(ERR, - "loop at pc:%u --> pc:%u;\n", + RTE_BPF_LOG_LINE(ERR, + "loop at pc:%u --> pc:%u;", i, node->edge_dest[j]); } } @@ -1979,7 +1979,7 @@ validate(struct bpf_verifier *bvf) err = check_syntax(ins); if (err != 0) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, i); rc |= -EINVAL; } @@ -2048,7 +2048,7 @@ validate(struct bpf_verifier *bvf) dfs(bvf); - RTE_BPF_LOG(DEBUG, "%s(%p) stats:\n" + RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" "nb_nodes=%u;\n" "nb_jcc_nodes=%u;\n" "node_color={[WHITE]=%u, [GREY]=%u,, [BLACK]=%u};\n" @@ -2062,7 +2062,7 @@ validate(struct bpf_verifier *bvf) bvf->edge_type[BACK_EDGE], bvf->edge_type[CROSS_EDGE]); if (bvf->node_colour[BLACK] != bvf->nb_nodes) { - RTE_BPF_LOG(ERR, "%s(%p) unreachable instructions;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) unreachable instructions;", __func__, bvf); log_unreachable(bvf); return -EINVAL; @@ -2070,13 +2070,13 @@ validate(struct bpf_verifier *bvf) if (bvf->node_colour[GREY] != 0 || bvf->node_colour[WHITE] != 0 || bvf->edge_type[UNKNOWN_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) DFS internal error;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) DFS internal error;", __func__, bvf); return -EINVAL; } if (bvf->edge_type[BACK_EDGE] != 0) { - RTE_BPF_LOG(ERR, "%s(%p) loops detected;\n", + RTE_BPF_LOG_LINE(ERR, "%s(%p) loops detected;", __func__, bvf); log_loop(bvf); return -EINVAL; @@ -2144,8 +2144,8 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) /* get new eval_state for this node */ st = pull_eval_state(bvf); if (st == NULL) { - RTE_BPF_LOG(ERR, - "%s: internal error (out of space) at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, + "%s: internal error (out of space) at pc: %u", __func__, get_node_idx(bvf, node)); return -ENOMEM; } @@ -2157,7 +2157,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) node->evst = bvf->evst; bvf->evst = st; - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, 
"%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), node->evst, bvf->evst); return 0; @@ -2169,7 +2169,7 @@ save_eval_state(struct bpf_verifier *bvf, struct inst_node *node) static void restore_eval_state(struct bpf_verifier *bvf, struct inst_node *node) { - RTE_BPF_LOG(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;\n", + RTE_BPF_LOG_LINE(DEBUG, "%s(bvf=%p,node=%u) old/new states: %p/%p;", __func__, bvf, get_node_idx(bvf, node), bvf->evst, node->evst); bvf->evst = node->evst; @@ -2184,12 +2184,12 @@ log_dbg_eval_state(const struct bpf_verifier *bvf, const struct ebpf_insn *ins, const struct bpf_eval_state *st; const struct bpf_reg_val *rv; - RTE_BPF_LOG(DEBUG, "%s(pc=%u):\n", __func__, pc); + RTE_BPF_LOG_LINE(DEBUG, "%s(pc=%u):", __func__, pc); st = bvf->evst; rv = st->rv + ins->dst_reg; - RTE_BPF_LOG(DEBUG, + RTE_LOG(DEBUG, BPF, "r%u={\n" "\tv={type=%u, size=%zu},\n" "\tmask=0x%" PRIx64 ",\n" @@ -2263,7 +2263,7 @@ evaluate(struct bpf_verifier *bvf) if (ins_chk[op].eval != NULL && rc == 0) { err = ins_chk[op].eval(bvf, ins + idx); if (err != NULL) { - RTE_BPF_LOG(ERR, "%s: %s at pc: %u\n", + RTE_BPF_LOG_LINE(ERR, "%s: %s at pc: %u", __func__, err, idx); rc = -EINVAL; } @@ -2312,7 +2312,7 @@ __rte_bpf_validate(struct rte_bpf *bpf) bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR && (sizeof(uint64_t) != sizeof(uintptr_t) || bpf->prm.prog_arg.type != RTE_BPF_ARG_PTR_MBUF)) { - RTE_BPF_LOG(ERR, "%s: unsupported argument type\n", __func__); + RTE_BPF_LOG_LINE(ERR, "%s: unsupported argument type", __func__); return -ENOTSUP; } diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c index 55a9dcc565..bd917a15fc 100644 --- a/lib/ethdev/ethdev_driver.c +++ b/lib/ethdev/ethdev_driver.c @@ -80,12 +80,12 @@ rte_eth_dev_allocate(const char *name) name_len = strnlen(name, RTE_ETH_NAME_MAX_LEN); if (name_len == 0) { - RTE_ETHDEV_LOG(ERR, "Zero length Ethernet device name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Zero length Ethernet device name"); return NULL; } if (name_len >= RTE_ETH_NAME_MAX_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethernet device name is too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Ethernet device name is too long"); return NULL; } @@ -96,16 +96,16 @@ rte_eth_dev_allocate(const char *name) goto unlock; if (eth_dev_allocated(name) != NULL) { - RTE_ETHDEV_LOG(ERR, - "Ethernet device with name %s already allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethernet device with name %s already allocated", name); goto unlock; } port_id = eth_dev_find_free_port(); if (port_id == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Reached maximum number of Ethernet ports\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Reached maximum number of Ethernet ports"); goto unlock; } @@ -163,8 +163,8 @@ rte_eth_dev_attach_secondary(const char *name) break; } if (i == RTE_MAX_ETHPORTS) { - RTE_ETHDEV_LOG(ERR, - "Device %s is not driven by the primary process\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Device %s is not driven by the primary process", name); } else { eth_dev = eth_dev_get(i); @@ -302,8 +302,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, device->numa_node); if (!ethdev->data->dev_private) { - RTE_ETHDEV_LOG(ERR, - "failed to allocate private data\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "failed to allocate private data"); retval = -ENOMEM; goto probe_failed; } @@ -311,8 +311,8 @@ rte_eth_dev_create(struct rte_device *device, const char *name, } else { ethdev = rte_eth_dev_attach_secondary(name); if (!ethdev) { - RTE_ETHDEV_LOG(ERR, - "secondary process attach failed, ethdev doesn't 
exist\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "secondary process attach failed, ethdev doesn't exist"); return -ENODEV; } } @@ -322,15 +322,15 @@ rte_eth_dev_create(struct rte_device *device, const char *name, if (ethdev_bus_specific_init) { retval = ethdev_bus_specific_init(ethdev, bus_init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, - "ethdev bus specific initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "ethdev bus specific initialisation failed"); goto probe_failed; } } retval = ethdev_init(ethdev, init_params); if (retval) { - RTE_ETHDEV_LOG(ERR, "ethdev initialisation failed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ethdev initialisation failed"); goto probe_failed; } @@ -394,7 +394,7 @@ void rte_eth_dev_internal_reset(struct rte_eth_dev *dev) { if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Port %u must be stopped to allow reset\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u must be stopped to allow reset", dev->data->port_id); return; } @@ -487,7 +487,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da) pair = &args.pairs[i]; if (strcmp("representor", pair->key) == 0) { if (eth_da->type != RTE_ETH_REPRESENTOR_NONE) { - RTE_ETHDEV_LOG(ERR, "duplicated representor key: %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "duplicated representor key: %s", dargs); result = -1; goto parse_cleanup; @@ -524,7 +524,7 @@ rte_eth_dma_zone_free(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); return -ENAMETOOLONG; } @@ -549,7 +549,7 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, rc = eth_dev_dma_mzone_name(z_name, sizeof(z_name), dev->data->port_id, queue_id, ring_name); if (rc >= RTE_MEMZONE_NAMESIZE) { - RTE_ETHDEV_LOG(ERR, "ring name too long\n"); + RTE_ETHDEV_LOG_LINE(ERR, "ring name too long"); rte_errno = ENAMETOOLONG; return NULL; } @@ -559,8 +559,8 @@ rte_eth_dma_zone_reserve(const struct rte_eth_dev *dev, const char *ring_name, if ((socket_id != SOCKET_ID_ANY && socket_id != mz->socket_id) || size > mz->len || ((uintptr_t)mz->addr & (align - 1)) != 0) { - RTE_ETHDEV_LOG(ERR, - "memzone %s does not justify the requested attributes\n", + RTE_ETHDEV_LOG_LINE(ERR, + "memzone %s does not justify the requested attributes", mz->name); return NULL; } @@ -713,7 +713,7 @@ rte_eth_representor_id_get(uint16_t port_id, if (info->ranges[i].controller != controller) continue; if (info->ranges[i].id_end < info->ranges[i].id_base) { - RTE_ETHDEV_LOG(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d\n", + RTE_ETHDEV_LOG_LINE(WARNING, "Port %hu invalid representor ID Range %u - %u, entry %d", port_id, info->ranges[i].id_base, info->ranges[i].id_end, i); continue; diff --git a/lib/ethdev/ethdev_pci.h b/lib/ethdev/ethdev_pci.h index ddb559aa95..737fff1833 100644 --- a/lib/ethdev/ethdev_pci.h +++ b/lib/ethdev/ethdev_pci.h @@ -31,7 +31,7 @@ rte_eth_copy_pci_info(struct rte_eth_dev *eth_dev, struct rte_pci_device *pci_dev) { if ((eth_dev == NULL) || (pci_dev == NULL)) { - RTE_ETHDEV_LOG(ERR, "NULL pointer eth_dev=%p pci_dev=%p\n", + RTE_ETHDEV_LOG_LINE(ERR, "NULL pointer eth_dev=%p pci_dev=%p", (void *)eth_dev, (void *)pci_dev); return; } diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 0e1c7b23c1..a656df293c 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -182,7 +182,7 @@ 
rte_eth_devargs_parse_representor_ports(char *str, void *data) RTE_DIM(eth_da->representor_ports)); done: if (str == NULL) - RTE_ETHDEV_LOG(ERR, "wrong representor format: %s\n", str); + RTE_ETHDEV_LOG_LINE(ERR, "wrong representor format: %s", str); return str == NULL ? -1 : 0; } @@ -214,7 +214,7 @@ dummy_eth_rx_burst(void *rxq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->rx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called rx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->rx_warn_once = true; @@ -233,7 +233,7 @@ dummy_eth_tx_burst(void *txq, port_id = queue - per_port_queues; if (port_id < RTE_DIM(per_port_queues) && !queue->tx_warn_once) { - RTE_ETHDEV_LOG(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR"\n", + RTE_ETHDEV_LOG_LINE(ERR, "lcore %u called tx_pkt_burst for not ready port %"PRIuPTR, rte_lcore_id(), port_id); rte_dump_stack(); queue->tx_warn_once = true; @@ -337,7 +337,7 @@ eth_dev_shared_data_prepare(void) sizeof(*eth_dev_shared_data), rte_socket_id(), flags); if (mz == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot allocate ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot allocate ethdev shared data"); goto out; } @@ -355,7 +355,7 @@ eth_dev_shared_data_prepare(void) /* Clean remaining any traces of a previous shared mem */ eth_dev_shared_mz = NULL; eth_dev_shared_data = NULL; - RTE_ETHDEV_LOG(ERR, "Cannot lookup ethdev shared data\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot lookup ethdev shared data"); goto out; } if (mz == eth_dev_shared_mz && mz->addr == eth_dev_shared_data) diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 311beb17cb..bc003db8af 100644 --- a/lib/ethdev/rte_class_eth.c +++ b/lib/ethdev/rte_class_eth.c @@ -165,7 +165,7 @@ eth_dev_iterate(const void *start, valid_keys = eth_params_keys; kvargs = rte_kvargs_parse(str, valid_keys); if (kvargs == NULL) { - RTE_ETHDEV_LOG(ERR, "cannot parse argument list\n"); + RTE_ETHDEV_LOG_LINE(ERR, "cannot parse argument list"); rte_errno = EINVAL; return NULL; } diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 9dd0efa9d8..c5e75a91c8 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -182,13 +182,13 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) int str_size; if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL iterator"); return -EINVAL; } if (devargs_str == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot initialize iterator from NULL device description string\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot initialize iterator from NULL device description string"); return -EINVAL; } @@ -279,7 +279,7 @@ rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs_str) error: if (ret == -ENOTSUP) - RTE_ETHDEV_LOG(ERR, "Bus %s does not support iterating.\n", + RTE_ETHDEV_LOG_LINE(ERR, "Bus %s does not support iterating.", iter->bus->name); rte_devargs_reset(&devargs); free(bus_str); @@ -291,8 +291,8 @@ uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter) { if (iter == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get next device from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get next device from NULL iterator"); return RTE_MAX_ETHPORTS; } @@ -331,7 +331,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter) { if (iter == NULL) { - 
RTE_ETHDEV_LOG(ERR, "Cannot do clean up from NULL iterator\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot do clean up from NULL iterator"); return; } @@ -447,7 +447,7 @@ rte_eth_dev_owner_new(uint64_t *owner_id) int ret; if (owner_id == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get new owner ID to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get new owner ID to NULL"); return -EINVAL; } @@ -477,30 +477,30 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, struct rte_eth_dev_owner *port_owner; if (port_id >= RTE_MAX_ETHPORTS || !eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (new_owner == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u owner from NULL owner\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u owner from NULL owner", port_id); return -EINVAL; } if (!eth_is_valid_owner_id(new_owner->id) && !eth_is_valid_owner_id(old_owner_id)) { - RTE_ETHDEV_LOG(ERR, - "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner old_id=%016"PRIx64" new_id=%016"PRIx64, old_owner_id, new_owner->id); return -EINVAL; } port_owner = &rte_eth_devices[port_id].data->owner; if (port_owner->id != old_owner_id) { - RTE_ETHDEV_LOG(ERR, - "Cannot set owner to port %u already owned by %s_%016"PRIX64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set owner to port %u already owned by %s_%016"PRIX64, port_id, port_owner->name, port_owner->id); return -EPERM; } @@ -510,7 +510,7 @@ eth_dev_owner_set(const uint16_t port_id, const uint64_t old_owner_id, port_owner->id = new_owner->id; - RTE_ETHDEV_LOG(DEBUG, "Port %u owner is %s_%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Port %u owner is %s_%016"PRIx64, port_id, new_owner->name, new_owner->id); return 0; @@ -575,14 +575,14 @@ rte_eth_dev_owner_delete(const uint64_t owner_id) memset(&data->owner, 0, sizeof(struct rte_eth_dev_owner)); } - RTE_ETHDEV_LOG(NOTICE, - "All port owners owned by %016"PRIx64" identifier have removed\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "All port owners owned by %016"PRIx64" identifier have removed", owner_id); eth_dev_shared_data->allocated_owners--; eth_dev_shared_data_release(); } else { - RTE_ETHDEV_LOG(ERR, - "Invalid owner ID=%016"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid owner ID=%016"PRIx64, owner_id); ret = -EINVAL; } @@ -604,13 +604,13 @@ rte_eth_dev_owner_get(const uint16_t port_id, struct rte_eth_dev_owner *owner) ethdev = &rte_eth_devices[port_id]; if (!eth_dev_is_allocated(ethdev)) { - RTE_ETHDEV_LOG(ERR, "Port ID %"PRIu16" is not allocated\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port ID %"PRIu16" is not allocated", port_id); return -ENODEV; } if (owner == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u owner to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u owner to NULL", port_id); return -EINVAL; } @@ -699,7 +699,7 @@ rte_eth_dev_get_name_by_port(uint16_t port_id, char *name) RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u name to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u name to NULL", port_id); return -EINVAL; } @@ -724,13 +724,13 @@ rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id) uint16_t pid; if (name == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get port ID from NULL name\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get port ID from NULL name"); return -EINVAL; } if (port_id == NULL) 
{ - RTE_ETHDEV_LOG(ERR, - "Cannot get port ID to NULL for %s\n", name); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get port ID to NULL for %s", name); return -EINVAL; } @@ -766,16 +766,16 @@ eth_dev_validate_rx_queue(const struct rte_eth_dev *dev, uint16_t rx_queue_id) if (rx_queue_id >= dev->data->nb_rx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Rx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Rx queue_id=%u of device with port_id=%u", rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queues[rx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", rx_queue_id, port_id); return -EINVAL; } @@ -790,16 +790,16 @@ eth_dev_validate_tx_queue(const struct rte_eth_dev *dev, uint16_t tx_queue_id) if (tx_queue_id >= dev->data->nb_tx_queues) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Invalid Tx queue_id=%u of device with port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid Tx queue_id=%u of device with port_id=%u", tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queues[tx_queue_id] == NULL) { port_id = dev->data->port_id; - RTE_ETHDEV_LOG(ERR, - "Queue %u of device with port_id=%u has not been setup\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Queue %u of device with port_id=%u has not been setup", tx_queue_id, port_id); return -EINVAL; } @@ -839,8 +839,8 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any queue", port_id); return -EINVAL; } @@ -853,15 +853,15 @@ rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", rx_queue_id, port_id); return 0; } @@ -890,15 +890,15 @@ rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_rx_hairpin_queue(dev, rx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Rx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, rx_queue_id, port_id); return -EINVAL; } if (dev->data->rx_queue_state[rx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", rx_queue_id, port_id); return 0; } @@ -920,8 +920,8 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) dev = &rte_eth_devices[port_id]; if (!dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be started before start any queue\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be started before start any 
queue", port_id); return -EINVAL; } @@ -934,15 +934,15 @@ rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't start Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] != RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already started", tx_queue_id, port_id); return 0; } @@ -971,15 +971,15 @@ rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id) return -ENOTSUP; if (rte_eth_dev_is_tx_hairpin_queue(dev, tx_queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't stop Tx hairpin queue %"PRIu16" of device with port_id=%"PRIu16, tx_queue_id, port_id); return -EINVAL; } if (dev->data->tx_queue_state[tx_queue_id] == RTE_ETH_QUEUE_STATE_STOPPED) { - RTE_ETHDEV_LOG(INFO, - "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Queue %"PRIu16" of device with port_id=%"PRIu16" already stopped", tx_queue_id, port_id); return 0; } @@ -1153,19 +1153,19 @@ eth_dev_check_lro_pkt_size(uint16_t port_id, uint32_t config_size, if (dev_info_size == 0) { if (config_size != max_rx_pkt_len) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size" - " %u != %u is not allowed\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size" + " %u != %u is not allowed", port_id, config_size, max_rx_pkt_len); ret = -EINVAL; } } else if (config_size > dev_info_size) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "> max allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "> max allowed value %u", port_id, config_size, dev_info_size); ret = -EINVAL; } else if (config_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " - "< min allowed value %u\n", port_id, config_size, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d max_lro_pkt_size %u " + "< min allowed value %u", port_id, config_size, (unsigned int)RTE_ETHER_MIN_LEN); ret = -EINVAL; } @@ -1203,16 +1203,16 @@ eth_dev_validate_offloads(uint16_t port_id, uint64_t req_offloads, /* Check if any offload is requested but not enabled. */ offload = RTE_BIT64(rte_ctz64(offloads_diff)); if (offload & req_offloads) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to enable %s offload %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to enable %s offload %s", port_id, offload_type, offload_name(offload)); ret = -EINVAL; } /* Check if offload couldn't be disabled. 
*/ if (offload & set_offloads) { - RTE_ETHDEV_LOG(DEBUG, - "Port %u %s offload %s is not requested but enabled\n", + RTE_ETHDEV_LOG_LINE(DEBUG, + "Port %u %s offload %s is not requested but enabled", port_id, offload_type, offload_name(offload)); } @@ -1244,14 +1244,14 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, uint32_t frame_size; if (mtu < dev_info->min_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) < device min MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) < device min MTU (%u) for port_id %u", mtu, dev_info->min_mtu, port_id); return -EINVAL; } if (mtu > dev_info->max_mtu) { - RTE_ETHDEV_LOG(ERR, - "MTU (%u) > device max MTU (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "MTU (%u) > device max MTU (%u) for port_id %u", mtu, dev_info->max_mtu, port_id); return -EINVAL; } @@ -1260,15 +1260,15 @@ eth_dev_validate_mtu(uint16_t port_id, struct rte_eth_dev_info *dev_info, dev_info->max_mtu); frame_size = mtu + overhead_len; if (frame_size < RTE_ETHER_MIN_LEN) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) < min frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) < min frame size (%u) for port_id %u", frame_size, RTE_ETHER_MIN_LEN, port_id); return -EINVAL; } if (frame_size > dev_info->max_rx_pktlen) { - RTE_ETHDEV_LOG(ERR, - "Frame size (%u) > device max frame size (%u) for port_id %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Frame size (%u) > device max frame size (%u) for port_id %u", frame_size, dev_info->max_rx_pktlen, port_id); return -EINVAL; } @@ -1292,8 +1292,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot configure ethdev port %u from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot configure ethdev port %u from NULL config", port_id); return -EINVAL; } @@ -1302,8 +1302,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, return -ENOTSUP; if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } @@ -1334,7 +1334,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.reserved_64s[1] != 0 || dev_conf->rxmode.reserved_ptrs[0] != NULL || dev_conf->rxmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "Rxmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rxmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1343,7 +1343,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.reserved_64s[1] != 0 || dev_conf->txmode.reserved_ptrs[0] != NULL || dev_conf->txmode.reserved_ptrs[1] != NULL) { - RTE_ETHDEV_LOG(ERR, "txmode reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "txmode reserved fields not zero"); ret = -EINVAL; goto rollback; } @@ -1368,16 +1368,16 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, } if (nb_rx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Rx queues requested (%u) is greater than max supported(%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Number of Rx queues requested (%u) is greater than max supported(%d)", nb_rx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; } if (nb_tx_q > RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Number of Tx queues requested (%u) is greater than max supported(%d)\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Number of Tx queues requested (%u) is greater than max supported(%d)", nb_tx_q, RTE_MAX_QUEUES_PER_PORT); ret = -EINVAL; goto rollback; @@ -1389,14 +1389,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, * configured device. */ if (nb_rx_q > dev_info.max_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_rx_queues=%u > %u", port_id, nb_rx_q, dev_info.max_rx_queues); ret = -EINVAL; goto rollback; } if (nb_tx_q > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u nb_tx_queues=%u > %u", port_id, nb_tx_q, dev_info.max_tx_queues); ret = -EINVAL; goto rollback; @@ -1405,14 +1405,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check that the device supports requested interrupts */ if ((dev_conf->intr_conf.lsc == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support lsc\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support lsc", dev->device->driver->name); ret = -EINVAL; goto rollback; } if ((dev_conf->intr_conf.rmv == 1) && (!(dev->data->dev_flags & RTE_ETH_DEV_INTR_RMV))) { - RTE_ETHDEV_LOG(ERR, "Driver %s does not support rmv\n", + RTE_ETHDEV_LOG_LINE(ERR, "Driver %s does not support rmv", dev->device->driver->name); ret = -EINVAL; goto rollback; @@ -1456,14 +1456,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->rxmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Rx offloads %s", port_id, eth_dev_offload_names( dev_conf->rxmode.offloads & ~dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Rx offloads %s", port_id, eth_dev_offload_names(dev_conf->rxmode.offloads, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Rx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Rx offloads %s", port_id, eth_dev_offload_names(dev_info.rx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_rx_offload_name)); @@ -1474,14 +1474,14 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, dev_conf->txmode.offloads) { char buffer[512]; - RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%u does not support Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u does not support Tx offloads %s", port_id, eth_dev_offload_names( dev_conf->txmode.offloads & ~dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u was requested Tx offloads %s", port_id, eth_dev_offload_names(dev_conf->txmode.offloads, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); - RTE_ETHDEV_LOG(DEBUG, "Ethdev port_id=%u supports Tx offloads %s\n", + RTE_ETHDEV_LOG_LINE(DEBUG, "Ethdev port_id=%u supports Tx offloads %s", port_id, eth_dev_offload_names(dev_info.tx_offload_capa, buffer, sizeof(buffer), rte_eth_dev_tx_offload_name)); ret = -EINVAL; @@ -1495,8 +1495,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if 
((dev_info.flow_type_rss_offloads | dev_conf->rx_adv_conf.rss_conf.rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, dev_conf->rx_adv_conf.rss_conf.rss_hf, dev_info.flow_type_rss_offloads); ret = -EINVAL; @@ -1506,8 +1506,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Check if Rx RSS distribution is disabled but RSS hash is enabled. */ if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) && (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested", port_id, rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH)); ret = -EINVAL; @@ -1516,8 +1516,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, if (dev_conf->rx_adv_conf.rss_conf.rss_key != NULL && dev_conf->rx_adv_conf.rss_conf.rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, dev_conf->rx_adv_conf.rss_conf.rss_key_len, dev_info.hash_key_size); ret = -EINVAL; @@ -1527,9 +1527,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, algorithm = dev_conf->rx_adv_conf.rss_conf.algorithm; if ((size_t)algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, algorithm, dev_info.rss_algo_capa); ret = -EINVAL; goto rollback; @@ -1540,8 +1540,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, */ diag = eth_dev_rx_queue_config(dev, nb_rx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_rx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_rx_queue_config = %d", port_id, diag); ret = diag; goto rollback; @@ -1549,8 +1549,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = eth_dev_tx_queue_config(dev, nb_tx_q); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, - "Port%u eth_dev_tx_queue_config = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port%u eth_dev_tx_queue_config = %d", port_id, diag); eth_dev_rx_queue_config(dev, 0); ret = diag; @@ -1559,7 +1559,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, diag = (*dev->dev_ops->dev_configure)(dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u dev_configure = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u dev_configure = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1568,7 +1568,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q, /* Initialize Rx profiling if enabled at compilation time. 
*/ diag = __rte_eth_dev_profile_init(port_id, dev); if (diag != 0) { - RTE_ETHDEV_LOG(ERR, "Port%u __rte_eth_dev_profile_init = %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port%u __rte_eth_dev_profile_init = %d", port_id, diag); ret = eth_err(port_id, diag); goto reset_queues; @@ -1666,8 +1666,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1676,8 +1676,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->promiscuous_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable promiscuous mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable promiscuous mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1693,8 +1693,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_enable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to enable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to enable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1703,8 +1703,8 @@ eth_dev_config_restore(struct rte_eth_dev *dev, ret = eth_err(port_id, (*dev->dev_ops->allmulticast_disable)(dev)); if (ret != 0 && ret != -ENOTSUP) { - RTE_ETHDEV_LOG(ERR, - "Failed to disable allmulticast mode for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to disable allmulticast mode for device (port %u): %s", port_id, rte_strerror(-ret)); return ret; } @@ -1728,15 +1728,15 @@ rte_eth_dev_start(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already started\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already started", port_id); return 0; } @@ -1757,13 +1757,13 @@ rte_eth_dev_start(uint16_t port_id) ret = eth_dev_config_restore(dev, &dev_info, port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Error during restoring configuration for device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Error during restoring configuration for device (port %u): %s", port_id, rte_strerror(-ret)); ret_stop = rte_eth_dev_stop(port_id); if (ret_stop != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u): %s\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u): %s", port_id, rte_strerror(-ret_stop)); } @@ -1796,8 +1796,8 @@ rte_eth_dev_stop(uint16_t port_id) return -ENOTSUP; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(INFO, - "Device with port_id=%"PRIu16" already stopped\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Device with port_id=%"PRIu16" already stopped", port_id); return 0; } @@ -1866,7 +1866,7 @@ rte_eth_dev_close(uint16_t port_id) */ if (rte_eal_process_type() == RTE_PROC_PRIMARY && dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, "Cannot close started device (port %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot close started device (port %u)", port_id); return -EINVAL; } @@ -1897,8 +1897,8 @@ 
rte_eth_dev_reset(uint16_t port_id) ret = rte_eth_dev_stop(port_id); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to stop device (port %u) before reset: %s - ignore\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to stop device (port %u) before reset: %s - ignore", port_id, rte_strerror(-ret)); } ret = eth_err(port_id, dev->dev_ops->dev_reset(dev)); @@ -1946,7 +1946,7 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, */ if (mp->private_data_size < sizeof(struct rte_pktmbuf_pool_private)) { - RTE_ETHDEV_LOG(ERR, "%s private_data_size %u < %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "%s private_data_size %u < %u", mp->name, mp->private_data_size, (unsigned int) sizeof(struct rte_pktmbuf_pool_private)); @@ -1954,8 +1954,8 @@ rte_eth_check_rx_mempool(struct rte_mempool *mp, uint16_t offset, } data_room_size = rte_pktmbuf_data_room_size(mp); if (data_room_size < offset + min_length) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (%u + %u)\n", + RTE_ETHDEV_LOG_LINE(ERR, + "%s mbuf_data_room_size %u < %u (%u + %u)", mp->name, data_room_size, offset + min_length, offset, min_length); return -EINVAL; @@ -2001,8 +2001,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, int i; if (n_seg > seg_capa->max_nseg) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx segments %u exceed supported %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx segments %u exceed supported %u", n_seg, seg_capa->max_nseg); return -EINVAL; } @@ -2023,24 +2023,24 @@ rte_eth_rx_queue_check_split(uint16_t port_id, uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { - RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null mempool pointer"); ret = -EINVAL; goto out; } if (seg_idx != 0 && mp_first != mpl && seg_capa->multi_pools == 0) { - RTE_ETHDEV_LOG(ERR, "Receiving to multiple pools is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Receiving to multiple pools is not supported"); ret = -ENOTSUP; goto out; } if (offset != 0) { if (seg_capa->offset_allowed == 0) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation with offset is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation with offset is not supported"); ret = -ENOTSUP; goto out; } if (offset & offset_mask) { - RTE_ETHDEV_LOG(ERR, "Rx segmentation invalid offset alignment %u, %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Rx segmentation invalid offset alignment %u, %u", offset, seg_capa->offset_align_log2); ret = -EINVAL; @@ -2053,22 +2053,22 @@ rte_eth_rx_queue_check_split(uint16_t port_id, if (proto_hdr != 0) { /* Split based on protocol headers. 
*/ if (length != 0) { - RTE_ETHDEV_LOG(ERR, - "Do not set length split and protocol split within a segment\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Do not set length split and protocol split within a segment" ); ret = -EINVAL; goto out; } if ((proto_hdr & prev_proto_hdrs) != 0) { - RTE_ETHDEV_LOG(ERR, - "Repeat with previous protocol headers or proto-split after length-based split\n" + RTE_ETHDEV_LOG_LINE(ERR, + "Repeat with previous protocol headers or proto-split after length-based split" ); ret = -EINVAL; goto out; } if (ptype_cnt <= 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u failed to get supported buffer split header protocols\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u failed to get supported buffer split header protocols", port_id); ret = -ENOTSUP; goto out; @@ -2078,8 +2078,8 @@ rte_eth_rx_queue_check_split(uint16_t port_id, break; } if (i == ptype_cnt) { - RTE_ETHDEV_LOG(ERR, - "Requested Rx split header protocols 0x%x is not supported.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Requested Rx split header protocols 0x%x is not supported.", proto_hdr); ret = -EINVAL; goto out; @@ -2109,8 +2109,8 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, int ret; if (n_mempools > dev_info->max_rx_mempools) { - RTE_ETHDEV_LOG(ERR, - "Too many Rx mempools %u vs maximum %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Too many Rx mempools %u vs maximum %u", n_mempools, dev_info->max_rx_mempools); return -EINVAL; } @@ -2119,7 +2119,7 @@ rte_eth_rx_queue_check_mempools(struct rte_mempool **rx_mempools, struct rte_mempool *mp = rx_mempools[pool_idx]; if (mp == NULL) { - RTE_ETHDEV_LOG(ERR, "null Rx mempool pointer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "null Rx mempool pointer"); return -EINVAL; } @@ -2153,7 +2153,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } @@ -2165,7 +2165,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, rx_conf->reserved_64s[1] != 0 || rx_conf->reserved_ptrs[0] != NULL || rx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Rx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx conf reserved fields not zero"); return -EINVAL; } @@ -2181,8 +2181,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if ((mp != NULL) + (rx_conf != NULL && rx_conf->rx_nseg > 0) + (rx_conf != NULL && rx_conf->rx_nmempool > 0) != 1) { - RTE_ETHDEV_LOG(ERR, - "Ambiguous Rx mempools configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Ambiguous Rx mempools configuration"); return -EINVAL; } @@ -2196,9 +2196,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mbp_buf_size = rte_pktmbuf_data_room_size(mp); buf_data_size = mbp_buf_size - RTE_PKTMBUF_HEADROOM; if (buf_data_size > dev_info.max_rx_bufsize) - RTE_ETHDEV_LOG(DEBUG, + RTE_ETHDEV_LOG_LINE(DEBUG, "For port_id=%u, the mbuf data buffer size (%u) is bigger than " - "max buffer size (%u) device can utilize, so mbuf size can be reduced.\n", + "max buffer size (%u) device can utilize, so mbuf size can be reduced.", port_id, buf_data_size, dev_info.max_rx_bufsize); } else if (rx_conf != NULL && rx_conf->rx_nseg > 0) { const struct rte_eth_rxseg_split *rx_seg; @@ -2206,8 +2206,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, /* Extended multi-segment configuration check. 
*/ if (rx_conf->rx_seg == NULL) { - RTE_ETHDEV_LOG(ERR, - "Memory pool is null and no multi-segment configuration provided\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Memory pool is null and no multi-segment configuration provided"); return -EINVAL; } @@ -2221,13 +2221,13 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No Rx segmentation offload configured"); return -EINVAL; } } else if (rx_conf != NULL && rx_conf->rx_nmempool > 0) { /* Extended multi-pool configuration check. */ if (rx_conf->rx_mempools == NULL) { - RTE_ETHDEV_LOG(ERR, "Memory pools array is null\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Memory pools array is null"); return -EINVAL; } @@ -2238,7 +2238,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (ret != 0) return ret; } else { - RTE_ETHDEV_LOG(ERR, "Missing Rx mempool configuration\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Missing Rx mempool configuration"); return -EINVAL; } @@ -2254,8 +2254,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, nb_rx_desc < dev_info.rx_desc_lim.nb_min || nb_rx_desc % dev_info.rx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_rx_desc, dev_info.rx_desc_lim.nb_max, dev_info.rx_desc_lim.nb_min, dev_info.rx_desc_lim.nb_align); @@ -2299,9 +2299,9 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, */ if ((local_conf.offloads & dev_info.rx_queue_offload_capa) != local_conf.offloads) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d rx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, rx_queue_id, local_conf.offloads, dev_info.rx_queue_offload_capa, __func__); @@ -2310,8 +2310,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (local_conf.share_group > 0 && (dev_info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%d rx_queue_id=%d, enabled share_group=%hu while device doesn't support Rx queue share", port_id, rx_queue_id, local_conf.share_group); return -EINVAL; } @@ -2367,20 +2367,20 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", rx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Rx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Rx hairpin queue from NULL config", port_id); return -EINVAL; } if (conf->reserved != 0) { - RTE_ETHDEV_LOG(ERR, - "Rx hairpin reserved field not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Rx hairpin reserved field not zero"); return -EINVAL; } @@ -2393,42 +2393,42 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, if (nb_rx_desc == 0) nb_rx_desc = cap.max_nb_desc; if (nb_rx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for 
nb_rx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_rx_desc(=%hu), should be: <= %hu", nb_rx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_rx_2_tx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_rx_2_tx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.rx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.rx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Rx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Rx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Rx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Rx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Rx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Rx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Rx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Rx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2438,7 +2438,7 @@ rte_eth_rx_hairpin_queue_setup(uint16_t port_id, uint16_t rx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Rx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Rx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2472,7 +2472,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -2484,7 +2484,7 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, tx_conf->reserved_64s[1] != 0 || tx_conf->reserved_ptrs[0] != NULL || tx_conf->reserved_ptrs[1] != NULL)) { - RTE_ETHDEV_LOG(ERR, "Tx conf reserved fields not zero\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Tx conf reserved fields not zero"); return -EINVAL; } @@ -2502,8 +2502,8 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc > dev_info.tx_desc_lim.nb_max || nb_tx_desc < dev_info.tx_desc_lim.nb_min || nb_tx_desc % dev_info.tx_desc_lim.nb_align != 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu, >= %hu, and a product of %hu", nb_tx_desc, dev_info.tx_desc_lim.nb_max, dev_info.tx_desc_lim.nb_min, dev_info.tx_desc_lim.nb_align); @@ -2547,9 +2547,9 @@ rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, */ if ((local_conf.offloads & dev_info.tx_queue_offload_capa) != local_conf.offloads) { - 
RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%d tx_queue_id=%d, new added offloads 0x%"PRIx64" must be " - "within per-queue offload capabilities 0x%"PRIx64" in %s()\n", + "within per-queue offload capabilities 0x%"PRIx64" in %s()", port_id, tx_queue_id, local_conf.offloads, dev_info.tx_queue_offload_capa, __func__); @@ -2576,13 +2576,13 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot setup ethdev port %u Tx hairpin queue from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot setup ethdev port %u Tx hairpin queue from NULL config", port_id); return -EINVAL; } @@ -2596,42 +2596,42 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, if (nb_tx_desc == 0) nb_tx_desc = cap.max_nb_desc; if (nb_tx_desc > cap.max_nb_desc) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for nb_tx_desc(=%hu), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for nb_tx_desc(=%hu), should be: <= %hu", nb_tx_desc, cap.max_nb_desc); return -EINVAL; } if (conf->peer_count > cap.max_tx_2_rx) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: <= %hu", conf->peer_count, cap.max_tx_2_rx); return -EINVAL; } if (conf->use_locked_device_memory && !cap.tx_cap.locked_device_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use locked device memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use locked device memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_rte_memory && !cap.tx_cap.rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use DPDK memory for Tx queue, which is not supported\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use DPDK memory for Tx queue, which is not supported"); return -EINVAL; } if (conf->use_locked_device_memory && conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to use mutually exclusive memory settings for Tx queue\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to use mutually exclusive memory settings for Tx queue"); return -EINVAL; } if (conf->force_memory && !conf->use_locked_device_memory && !conf->use_rte_memory) { - RTE_ETHDEV_LOG(ERR, - "Attempt to force Tx queue memory settings, but none is set\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Attempt to force Tx queue memory settings, but none is set"); return -EINVAL; } if (conf->peer_count == 0) { - RTE_ETHDEV_LOG(ERR, - "Invalid value for number of peers for Tx queue(=%u), should be: > 0\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid value for number of peers for Tx queue(=%u), should be: > 0", conf->peer_count); return -EINVAL; } @@ -2641,7 +2641,7 @@ rte_eth_tx_hairpin_queue_setup(uint16_t port_id, uint16_t tx_queue_id, count++; } if (count > cap.max_nb_queues) { - RTE_ETHDEV_LOG(ERR, "To many Tx hairpin queues max is %d\n", + RTE_ETHDEV_LOG_LINE(ERR, "To many Tx hairpin queues max is %d", cap.max_nb_queues); return -EINVAL; } @@ -2671,7 +2671,7 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is not started\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is not started", tx_port); 
return -EBUSY; } @@ -2679,8 +2679,8 @@ rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_bind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to bind hairpin Tx %d" - " to Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to bind hairpin Tx %d" + " to Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_bind(tx_port, rx_port, ret); @@ -2698,7 +2698,7 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) dev = &rte_eth_devices[tx_port]; if (dev->data->dev_started == 0) { - RTE_ETHDEV_LOG(ERR, "Tx port %d is already stopped\n", tx_port); + RTE_ETHDEV_LOG_LINE(ERR, "Tx port %d is already stopped", tx_port); return -EBUSY; } @@ -2706,8 +2706,8 @@ rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port) return -ENOTSUP; ret = (*dev->dev_ops->hairpin_unbind)(dev, rx_port); if (ret != 0) - RTE_ETHDEV_LOG(ERR, "Failed to unbind hairpin Tx %d" - " from Rx %d (%d - all ports)\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to unbind hairpin Tx %d" + " from Rx %d (%d - all ports)", tx_port, rx_port, RTE_MAX_ETHPORTS); rte_eth_trace_hairpin_unbind(tx_port, rx_port, ret); @@ -2726,15 +2726,15 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, dev = &rte_eth_devices[port_id]; if (peer_ports == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to NULL", port_id); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin peer ports to array with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin peer ports to array with zero size", port_id); return -EINVAL; } @@ -2745,7 +2745,7 @@ rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, ret = (*dev->dev_ops->hairpin_get_peer_ports)(dev, peer_ports, len, direction); if (ret < 0) - RTE_ETHDEV_LOG(ERR, "Failed to get %d hairpin peer %s ports\n", + RTE_ETHDEV_LOG_LINE(ERR, "Failed to get %d hairpin peer %s ports", port_id, direction ? 
"Rx" : "Tx"); rte_eth_trace_hairpin_get_peer_ports(port_id, peer_ports, len, @@ -2780,8 +2780,8 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer, buffer_tx_error_fn cbfn, void *userdata) { if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set Tx buffer error callback to NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set Tx buffer error callback to NULL buffer"); return -EINVAL; } @@ -2799,7 +2799,7 @@ rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size) int ret = 0; if (buffer == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot initialize NULL buffer\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot initialize NULL buffer"); return -EINVAL; } @@ -2977,7 +2977,7 @@ rte_eth_link_get(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3005,7 +3005,7 @@ rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *eth_link) dev = &rte_eth_devices[port_id]; if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u link to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u link to NULL", port_id); return -EINVAL; } @@ -3093,18 +3093,18 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link) int ret; if (str == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert link to NULL string\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert link to NULL string"); return -EINVAL; } if (len == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot convert link to string with zero size\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot convert link to string with zero size"); return -EINVAL; } if (eth_link == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot convert to string from NULL link\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot convert to string from NULL link"); return -EINVAL; } @@ -3133,7 +3133,7 @@ rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats) dev = &rte_eth_devices[port_id]; if (stats == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u stats to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u stats to NULL", port_id); return -EINVAL; } @@ -3220,15 +3220,15 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); if (xstat_name == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID from NULL xstat name\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID from NULL xstat name", port_id); return -ENOMEM; } if (id == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u xstats ID to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u xstats ID to NULL", port_id); return -ENOMEM; } @@ -3236,7 +3236,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, /* Get count */ cnt_xstats = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL); if (cnt_xstats < 0) { - RTE_ETHDEV_LOG(ERR, "Cannot get count of xstats\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get count of xstats"); return -ENODEV; } @@ -3245,7 +3245,7 @@ rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, if (cnt_xstats != rte_eth_xstats_get_names_by_id( port_id, xstats_names, cnt_xstats, NULL)) { - RTE_ETHDEV_LOG(ERR, "Cannot get xstats lookup\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get xstats lookup"); return -1; } @@ -3376,7 +3376,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, 
sizeof(struct rte_eth_xstat_name)); if (!xstats_names_copy) { - RTE_ETHDEV_LOG(ERR, "Can't allocate memory\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Can't allocate memory"); return -ENOMEM; } @@ -3404,7 +3404,7 @@ rte_eth_xstats_get_names_by_id(uint16_t port_id, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); free(xstats_names_copy); return -1; } @@ -3600,7 +3600,7 @@ rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, /* Filter stats */ for (i = 0; i < size; i++) { if (ids[i] >= expected_entries) { - RTE_ETHDEV_LOG(ERR, "Id value isn't valid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Id value isn't valid"); return -1; } values[i] = xstats[ids[i]].value; @@ -3748,8 +3748,8 @@ rte_eth_dev_fw_version_get(uint16_t port_id, char *fw_version, size_t fw_size) dev = &rte_eth_devices[port_id]; if (fw_version == NULL && fw_size > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FW version to NULL when string size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FW version to NULL when string size is non zero", port_id); return -EINVAL; } @@ -3781,7 +3781,7 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info) dev = &rte_eth_devices[port_id]; if (dev_info == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u info to NULL", port_id); return -EINVAL; } @@ -3837,8 +3837,8 @@ rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf) dev = &rte_eth_devices[port_id]; if (dev_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u configuration to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u configuration to NULL", port_id); return -EINVAL; } @@ -3862,8 +3862,8 @@ rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3912,8 +3912,8 @@ rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, dev = &rte_eth_devices[port_id]; if (num > 0 && set_ptypes == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u set packet types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u set packet types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -3992,7 +3992,7 @@ rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, struct rte_eth_dev_info dev_info; if (ma == NULL) { - RTE_ETHDEV_LOG(ERR, "%s: invalid parameters\n", __func__); + RTE_ETHDEV_LOG_LINE(ERR, "%s: invalid parameters", __func__); return -EINVAL; } @@ -4019,8 +4019,8 @@ rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr) dev = &rte_eth_devices[port_id]; if (mac_addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u MAC address to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u MAC address to NULL", port_id); return -EINVAL; } @@ -4041,7 +4041,7 @@ rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu) dev = &rte_eth_devices[port_id]; if (mtu == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u MTU to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u MTU to NULL", 
port_id); return -EINVAL; } @@ -4082,8 +4082,8 @@ rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu) } if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before MTU set\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before MTU set", port_id); return -EINVAL; } @@ -4110,13 +4110,13 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on) if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) { - RTE_ETHDEV_LOG(ERR, "Port %u: VLAN-filtering disabled\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: VLAN-filtering disabled", port_id); return -ENOSYS; } if (vlan_id > 4095) { - RTE_ETHDEV_LOG(ERR, "Port_id=%u invalid vlan_id=%u > 4095\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port_id=%u invalid vlan_id=%u > 4095", port_id, vlan_id); return -EINVAL; } @@ -4156,7 +4156,7 @@ rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, dev = &rte_eth_devices[port_id]; if (rx_queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_queue_id=%u\n", rx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_queue_id=%u", rx_queue_id); return -EINVAL; } @@ -4261,10 +4261,10 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask) /* Rx VLAN offloading must be within its device capabilities */ if ((dev_offloads & dev_info.rx_offload_capa) != dev_offloads) { new_offloads = dev_offloads & ~orig_offloads; - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u requested new added VLAN offloads " "0x%" PRIx64 " must be within Rx offloads capabilities " - "0x%" PRIx64 " in %s()\n", + "0x%" PRIx64 " in %s()", port_id, new_offloads, dev_info.rx_offload_capa, __func__); return -EINVAL; @@ -4342,8 +4342,8 @@ rte_eth_dev_flow_ctrl_get(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u flow control config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u flow control config to NULL", port_id); return -EINVAL; } @@ -4368,14 +4368,14 @@ rte_eth_dev_flow_ctrl_set(uint16_t port_id, struct rte_eth_fc_conf *fc_conf) dev = &rte_eth_devices[port_id]; if (fc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u flow control from NULL config", port_id); return -EINVAL; } if ((fc_conf->send_xon != 0) && (fc_conf->send_xon != 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid send_xon, only 0/1 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid send_xon, only 0/1 allowed"); return -EINVAL; } @@ -4399,14 +4399,14 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u priority flow control from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u priority flow control from NULL config", port_id); return -EINVAL; } if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) { - RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid priority, only 0-7 allowed"); return -EINVAL; } @@ -4428,16 +4428,16 @@ validate_rx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->rx_pause.tx_qid >= dev_info->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Tx queue not in range for Rx pause requested:%d configured:%d\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "PFC Tx queue not in range for Rx pause requested:%d configured:%d", pfc_queue_conf->rx_pause.tx_qid, dev_info->nb_tx_queues); return -EINVAL; } if (pfc_queue_conf->rx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Rx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Rx pause requested:%d max:%d", pfc_queue_conf->rx_pause.tc, tc_max); return -EINVAL; } @@ -4453,16 +4453,16 @@ validate_tx_pause_config(struct rte_eth_dev_info *dev_info, uint8_t tc_max, if ((pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) || (pfc_queue_conf->mode == RTE_ETH_FC_FULL)) { if (pfc_queue_conf->tx_pause.rx_qid >= dev_info->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "PFC Rx queue not in range for Tx pause requested:%d configured:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC Rx queue not in range for Tx pause requested:%d configured:%d", pfc_queue_conf->tx_pause.rx_qid, dev_info->nb_rx_queues); return -EINVAL; } if (pfc_queue_conf->tx_pause.tc >= tc_max) { - RTE_ETHDEV_LOG(ERR, - "PFC TC not in range for Tx pause requested:%d max:%d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "PFC TC not in range for Tx pause requested:%d max:%d", pfc_queue_conf->tx_pause.tc, tc_max); return -EINVAL; } @@ -4482,7 +4482,7 @@ rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_info == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC info param is NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC info param is NULL for port (%u)", port_id); return -EINVAL; } @@ -4511,7 +4511,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (pfc_queue_conf == NULL) { - RTE_ETHDEV_LOG(ERR, "PFC parameters are NULL for port (%u)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC parameters are NULL for port (%u)", port_id); return -EINVAL; } @@ -4525,7 +4525,7 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, return ret; if (pfc_info.tc_max == 0) { - RTE_ETHDEV_LOG(ERR, "Ethdev port %u does not support PFC TC values\n", + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port %u does not support PFC TC values", port_id); return -ENOTSUP; } @@ -4533,14 +4533,14 @@ rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, /* Check requested mode supported or not */ if (pfc_info.mode_capa == RTE_ETH_FC_RX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_TX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Tx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Tx pause unsupported for port (%d)", port_id); return -EINVAL; } if (pfc_info.mode_capa == RTE_ETH_FC_TX_PAUSE && pfc_queue_conf->mode == RTE_ETH_FC_RX_PAUSE) { - RTE_ETHDEV_LOG(ERR, "PFC Rx pause unsupported for port (%d)\n", + RTE_ETHDEV_LOG_LINE(ERR, "PFC Rx pause unsupported for port (%d)", port_id); return -EINVAL; } @@ -4597,7 +4597,7 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t i, idx, shift; if (max_rxq == 0) { - RTE_ETHDEV_LOG(ERR, "No receive queue is available\n"); + RTE_ETHDEV_LOG_LINE(ERR, "No receive queue is available"); return -EINVAL; } @@ -4606,8 +4606,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf, shift = i % RTE_ETH_RETA_GROUP_SIZE; if ((reta_conf[idx].mask & RTE_BIT64(shift)) && (reta_conf[idx].reta[shift] >= max_rxq)) { - RTE_ETHDEV_LOG(ERR, - "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "reta_conf[%u]->reta[%u]: %u exceeds the maximum rxq index: %u", idx, shift, reta_conf[idx].reta[shift], max_rxq); return -EINVAL; @@ 
-4630,15 +4630,15 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA to NULL", port_id); return -EINVAL; } if (reta_size == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS RETA with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS RETA with zero size", port_id); return -EINVAL; } @@ -4656,7 +4656,7 @@ rte_eth_dev_rss_reta_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } @@ -4682,8 +4682,8 @@ rte_eth_dev_rss_reta_query(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (reta_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot query ethdev port %u RSS RETA from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot query ethdev port %u RSS RETA from NULL config", port_id); return -EINVAL; } @@ -4716,8 +4716,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot update ethdev port %u RSS hash from NULL config\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot update ethdev port %u RSS hash from NULL config", port_id); return -EINVAL; } @@ -4729,8 +4729,8 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, rss_conf->rss_hf = rte_eth_rss_hf_refine(rss_conf->rss_hf); if ((dev_info.flow_type_rss_offloads | rss_conf->rss_hf) != dev_info.flow_type_rss_offloads) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64"\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid rss_hf: 0x%"PRIx64", valid value: 0x%"PRIx64, port_id, rss_conf->rss_hf, dev_info.flow_type_rss_offloads); return -EINVAL; @@ -4738,14 +4738,14 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, mq_mode = dev->data->dev_conf.rxmode.mq_mode; if (!(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) { - RTE_ETHDEV_LOG(ERR, "Multi-queue RSS mode isn't enabled.\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Multi-queue RSS mode isn't enabled."); return -ENOTSUP; } if (rss_conf->rss_key != NULL && rss_conf->rss_key_len != dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, valid value: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4753,9 +4753,9 @@ rte_eth_dev_rss_hash_update(uint16_t port_id, if ((size_t)rss_conf->algorithm >= CHAR_BIT * sizeof(dev_info.rss_algo_capa) || (dev_info.rss_algo_capa & RTE_ETH_HASH_ALGO_TO_CAPA(rss_conf->algorithm)) == 0) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Ethdev port_id=%u configured RSS hash algorithm (%u)" - "is not in the algorithm capability (0x%" PRIx32 ")\n", + "is not in the algorithm capability (0x%" PRIx32 ")", port_id, rss_conf->algorithm, dev_info.rss_algo_capa); return -EINVAL; } @@ -4782,8 +4782,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (rss_conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u RSS hash config to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u RSS hash config to NULL", port_id); return -EINVAL; } @@ -4794,8 +4794,8 @@ rte_eth_dev_rss_hash_conf_get(uint16_t port_id, if (rss_conf->rss_key 
!= NULL && rss_conf->rss_key_len < dev_info.hash_key_size) { - RTE_ETHDEV_LOG(ERR, - "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Ethdev port_id=%u invalid RSS key len: %u, should not be less than: %u", port_id, rss_conf->rss_key_len, dev_info.hash_key_size); return -EINVAL; } @@ -4837,14 +4837,14 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4869,14 +4869,14 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (udp_tunnel == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot delete ethdev port %u UDP tunnel port from NULL UDP tunnel", port_id); return -EINVAL; } if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid tunnel type"); return -EINVAL; } @@ -4938,8 +4938,8 @@ rte_eth_fec_get_capability(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (speed_fec_capa == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u FEC capability to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u FEC capability to NULL when array size is non zero", port_id); return -EINVAL; } @@ -4963,8 +4963,8 @@ rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u current FEC mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u current FEC mode to NULL", port_id); return -EINVAL; } @@ -4988,7 +4988,7 @@ rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa) dev = &rte_eth_devices[port_id]; if (fec_capa == 0) { - RTE_ETHDEV_LOG(ERR, "At least one FEC mode should be specified\n"); + RTE_ETHDEV_LOG_LINE(ERR, "At least one FEC mode should be specified"); return -EINVAL; } @@ -5040,8 +5040,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot add ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot add ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5050,12 +5050,12 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, return -ENOTSUP; if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } if (pool >= RTE_ETH_64_POOLS) { - RTE_ETHDEV_LOG(ERR, "Pool ID must be 0-%d\n", RTE_ETH_64_POOLS - 1); + RTE_ETHDEV_LOG_LINE(ERR, "Pool ID must be 0-%d", RTE_ETH_64_POOLS - 1); return -EINVAL; } @@ -5063,7 +5063,7 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { index = eth_dev_get_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", 
port_id); return -ENOSPC; } @@ -5103,8 +5103,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot remove ethdev port %u MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot remove ethdev port %u MAC address from NULL address", port_id); return -EINVAL; } @@ -5114,8 +5114,8 @@ rte_eth_dev_mac_addr_remove(uint16_t port_id, struct rte_ether_addr *addr) index = eth_dev_get_mac_addr_index(port_id, addr); if (index == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u: Cannot remove default MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: Cannot remove default MAC address", port_id); return -EADDRINUSE; } else if (index < 0) @@ -5146,8 +5146,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u default MAC address from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u default MAC address from NULL address", port_id); return -EINVAL; } @@ -5161,8 +5161,8 @@ rte_eth_dev_default_mac_addr_set(uint16_t port_id, struct rte_ether_addr *addr) /* Keep address unique in dev->data->mac_addrs[]. */ index = eth_dev_get_mac_addr_index(port_id, addr); if (index > 0) { - RTE_ETHDEV_LOG(ERR, - "New default address for port %u was already in the address list. Please remove it first.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "New default address for port %u was already in the address list. Please remove it first.", port_id); return -EEXIST; } @@ -5220,14 +5220,14 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, dev = &rte_eth_devices[port_id]; if (addr == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u unicast hash table from NULL address\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u unicast hash table from NULL address", port_id); return -EINVAL; } if (rte_is_zero_ether_addr(addr)) { - RTE_ETHDEV_LOG(ERR, "Port %u: Cannot add NULL MAC address\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: Cannot add NULL MAC address", port_id); return -EINVAL; } @@ -5239,15 +5239,15 @@ rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, if (index < 0) { if (!on) { - RTE_ETHDEV_LOG(ERR, - "Port %u: the MAC address was not set in UTA\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u: the MAC address was not set in UTA", port_id); return -EINVAL; } index = eth_dev_get_hash_mac_addr_index(port_id, &null_mac_addr); if (index < 0) { - RTE_ETHDEV_LOG(ERR, "Port %u: MAC address array full\n", + RTE_ETHDEV_LOG_LINE(ERR, "Port %u: MAC address array full", port_id); return -ENOSPC; } @@ -5309,15 +5309,15 @@ int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, link = dev->data->dev_link; if (queue_idx > dev_info.max_tx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:port %u: invalid queue ID=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:port %u: invalid queue ID=%u", port_id, queue_idx); return -EINVAL; } if (tx_rate > link.link_speed) { - RTE_ETHDEV_LOG(ERR, - "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue rate limit:invalid tx_rate=%u, bigger than link speed= %d", tx_rate, link.link_speed); return -EINVAL; } @@ -5342,15 +5342,15 @@ int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id > dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, - "Set 
queue avail thresh: port %u: invalid queue ID=%u.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: invalid queue ID=%u.", port_id, queue_id); return -EINVAL; } if (avail_thresh > 99) { - RTE_ETHDEV_LOG(ERR, - "Set queue avail thresh: port %u: threshold should be <= 99.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Set queue avail thresh: port %u: threshold should be <= 99.", port_id); return -EINVAL; } @@ -5415,14 +5415,14 @@ rte_eth_dev_callback_register(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot register ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot register ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5485,14 +5485,14 @@ rte_eth_dev_callback_unregister(uint16_t port_id, uint16_t last_port; if (cb_fn == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot unregister ethdev port %u callback from NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot unregister ethdev port %u callback from NULL", port_id); return -EINVAL; } if (!rte_eth_dev_is_valid_port(port_id) && port_id != RTE_ETH_ALL) { - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%d\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%d", port_id); return -EINVAL; } @@ -5551,13 +5551,13 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) dev = &rte_eth_devices[port_id]; if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -EPERM; } @@ -5568,8 +5568,8 @@ rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data) rte_ethdev_trace_rx_intr_ctl(port_id, qid, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, qid, op, epfd, vec); } } @@ -5590,18 +5590,18 @@ rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id) dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -1; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -1; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector unset"); return -1; } @@ -5628,18 +5628,18 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (!dev->intr_handle) { - RTE_ETHDEV_LOG(ERR, "Rx Intr handle unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr handle unset"); return -ENOTSUP; } intr_handle = dev->intr_handle; if (rte_intr_vec_list_index_get(intr_handle, 0) < 0) { - RTE_ETHDEV_LOG(ERR, "Rx Intr vector unset\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Rx Intr vector 
unset"); return -EPERM; } @@ -5649,8 +5649,8 @@ rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, rte_ethdev_trace_rx_intr_ctl_q(port_id, queue_id, epfd, op, data, rc); if (rc && rc != -EEXIST) { - RTE_ETHDEV_LOG(ERR, - "p %u q %u Rx ctl error op %d epfd %d vec %u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "p %u q %u Rx ctl error op %d epfd %d vec %u", port_id, queue_id, op, epfd, vec); return rc; } @@ -5949,28 +5949,28 @@ rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Rx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->rx_queues == NULL || dev->data->rx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Rx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_rx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Rx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -5997,28 +5997,28 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (qinfo == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get ethdev port %u Tx queue %u info to NULL", port_id, queue_id); return -EINVAL; } if (dev->data->tx_queues == NULL || dev->data->tx_queues[queue_id] == NULL) { - RTE_ETHDEV_LOG(ERR, + RTE_ETHDEV_LOG_LINE(ERR, "Tx queue %"PRIu16" of device with port_id=%" - PRIu16" has not been setup\n", + PRIu16" has not been setup", queue_id, port_id); return -EINVAL; } if (rte_eth_dev_is_tx_hairpin_queue(dev, queue_id)) { - RTE_ETHDEV_LOG(INFO, - "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16"\n", + RTE_ETHDEV_LOG_LINE(INFO, + "Can't get hairpin Tx queue %"PRIu16" info of device with port_id=%"PRIu16, queue_id, port_id); return -EINVAL; } @@ -6068,13 +6068,13 @@ rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u burst mode to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6101,13 +6101,13 @@ rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (mode == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Tx queue %u burst mode to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Tx queue %u burst mode to NULL", port_id, queue_id); return -EINVAL; } @@ -6134,13 +6134,13 @@ rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (pmc == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u Rx queue %u power monitor condition to NULL", port_id, queue_id); return -EINVAL; } @@ -6224,8 +6224,8 @@ rte_eth_timesync_read_rx_timestamp(uint16_t port_id, struct timespec *timestamp, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Rx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Rx timestamp to NULL", port_id); return -EINVAL; } @@ -6253,8 +6253,8 @@ rte_eth_timesync_read_tx_timestamp(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u Tx timestamp to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u Tx timestamp to NULL", port_id); return -EINVAL; } @@ -6299,8 +6299,8 @@ rte_eth_timesync_read_time(uint16_t port_id, struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot read ethdev port %u timesync time to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot read ethdev port %u timesync time to NULL", port_id); return -EINVAL; } @@ -6325,8 +6325,8 @@ rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *timestamp) dev = &rte_eth_devices[port_id]; if (timestamp == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot write ethdev port %u timesync from NULL time\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot write ethdev port %u timesync from NULL time", port_id); return -EINVAL; } @@ -6351,7 +6351,7 @@ rte_eth_read_clock(uint16_t port_id, uint64_t *clock) dev = &rte_eth_devices[port_id]; if (clock == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot read ethdev port %u clock to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, "Cannot read ethdev port %u clock to NULL", port_id); return -EINVAL; } @@ -6375,8 +6375,8 @@ rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u register info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u register info to NULL", port_id); return -EINVAL; } @@ -6418,8 +6418,8 @@ rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM info to NULL", port_id); return -EINVAL; } @@ -6443,8 +6443,8 @@ rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot set ethdev port %u EEPROM from NULL info\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot set ethdev port %u EEPROM from NULL info", port_id); return -EINVAL; } @@ -6469,8 +6469,8 @@ rte_eth_dev_get_module_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (modinfo == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u EEPROM module info to 
NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u EEPROM module info to NULL", port_id); return -EINVAL; } @@ -6495,22 +6495,22 @@ rte_eth_dev_get_module_eeprom(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM info to NULL", port_id); return -EINVAL; } if (info->data == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM data to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM data to NULL", port_id); return -EINVAL; } if (info->length == 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u module EEPROM to data with zero size\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u module EEPROM to data with zero size", port_id); return -EINVAL; } @@ -6535,8 +6535,8 @@ rte_eth_dev_get_dcb_info(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dcb_info == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u DCB info to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u DCB info to NULL", port_id); return -EINVAL; } @@ -6601,8 +6601,8 @@ rte_eth_dev_hairpin_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (cap == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u hairpin capability to NULL\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u hairpin capability to NULL", port_id); return -EINVAL; } @@ -6627,8 +6627,8 @@ rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool) dev = &rte_eth_devices[port_id]; if (pool == NULL) { - RTE_ETHDEV_LOG(ERR, - "Cannot test ethdev port %u mempool operation from NULL pool\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot test ethdev port %u mempool operation from NULL pool", port_id); return -EINVAL; } @@ -6672,14 +6672,14 @@ rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features) dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured != 0) { - RTE_ETHDEV_LOG(ERR, - "The port (ID=%"PRIu16") is already configured\n", + RTE_ETHDEV_LOG_LINE(ERR, + "The port (ID=%"PRIu16") is already configured", port_id); return -EBUSY; } if (features == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid features (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid features (NULL)"); return -EINVAL; } @@ -6708,14 +6708,14 @@ rte_eth_ip_reassembly_capability_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly capability\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly capability", port_id); return -EINVAL; } if (reassembly_capa == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly capability to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly capability to NULL"); return -EINVAL; } @@ -6743,14 +6743,14 @@ rte_eth_ip_reassembly_conf_get(uint16_t port_id, dev = &rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot get IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot get IP reassembly configuration", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, "Cannot get reassembly info to NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Cannot get reassembly info to NULL"); return -EINVAL; } @@ -6776,22 +6776,22 @@ rte_eth_ip_reassembly_conf_set(uint16_t port_id, dev = 
&rte_eth_devices[port_id]; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is not configured, cannot set IP reassembly configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is not configured, cannot set IP reassembly configuration", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_ETHDEV_LOG(ERR, - "port_id=%u is started, cannot configure IP reassembly params.\n", + RTE_ETHDEV_LOG_LINE(ERR, + "port_id=%u is started, cannot configure IP reassembly params.", port_id); return -EINVAL; } if (conf == NULL) { - RTE_ETHDEV_LOG(ERR, - "Invalid IP reassembly configuration (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid IP reassembly configuration (NULL)"); return -EINVAL; } @@ -6814,7 +6814,7 @@ rte_eth_dev_priv_dump(uint16_t port_id, FILE *file) dev = &rte_eth_devices[port_id]; if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6833,12 +6833,12 @@ rte_eth_rx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_rx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6859,12 +6859,12 @@ rte_eth_tx_descriptor_dump(uint16_t port_id, uint16_t queue_id, dev = &rte_eth_devices[port_id]; if (queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", queue_id); return -EINVAL; } if (file == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid file (NULL)\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid file (NULL)"); return -EINVAL; } @@ -6886,8 +6886,8 @@ rte_eth_buffer_split_get_supported_hdr_ptypes(uint16_t port_id, uint32_t *ptypes dev = &rte_eth_devices[port_id]; if (ptypes == NULL && num > 0) { - RTE_ETHDEV_LOG(ERR, - "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Cannot get ethdev port %u supported header protocol types to NULL when array size is non zero", port_id); return -EINVAL; } @@ -6940,7 +6940,7 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, dev = &rte_eth_devices[port_id]; if (tx_queue_id >= dev->data->nb_tx_queues) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u\n", tx_queue_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u", tx_queue_id); return -EINVAL; } @@ -6948,30 +6948,30 @@ int rte_eth_dev_map_aggr_tx_affinity(uint16_t port_id, uint16_t tx_queue_id, return -ENOTSUP; if (dev->data->dev_configured == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be configured before Tx affinity mapping\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be configured before Tx affinity mapping", port_id); return -EINVAL; } if (dev->data->dev_started) { - RTE_ETHDEV_LOG(ERR, - "Port %u must be stopped to allow configuration\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u must be stopped to allow configuration", port_id); return -EBUSY; } aggr_ports = rte_eth_dev_count_aggr_ports(port_id); if (aggr_ports == 0) { - RTE_ETHDEV_LOG(ERR, - "Port %u has no aggregated port\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Port %u has no aggregated port", port_id); return -ENOTSUP; } if (affinity > aggr_ports) { - RTE_ETHDEV_LOG(ERR, - "Port %u map invalid affinity %u exceeds the maximum number %u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, + "Port %u map invalid affinity %u exceeds the maximum number %u", port_id, affinity, aggr_ports); return -EINVAL; } diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 77331ce652..e89e474c39 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -176,9 +176,11 @@ extern "C" { #include "rte_dev_info.h" extern int rte_eth_dev_logtype; +#define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype -#define RTE_ETHDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) +#define RTE_ETHDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) struct rte_mbuf; @@ -2000,14 +2002,14 @@ struct rte_eth_fec_capa { /* Macros to check for valid port */ #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return retval; \ } \ } while (0) #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ if (!rte_eth_dev_is_valid_port(port_id)) { \ - RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ + RTE_ETHDEV_LOG_LINE(ERR, "Invalid port_id=%u", port_id); \ return; \ } \ } while (0) @@ -6052,8 +6054,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6067,7 +6069,7 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6123,8 +6125,8 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6196,8 +6198,8 @@ rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6267,8 +6269,8 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return -EINVAL; } @@ -6391,8 +6393,8 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); return 0; } @@ -6406,7 +6408,7 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + 
RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); return 0; } @@ -6501,8 +6503,8 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (port_id >= RTE_MAX_ETHPORTS || queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid port_id=%u or queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid port_id=%u or queue_id=%u", port_id, queue_id); rte_errno = ENODEV; return 0; @@ -6515,12 +6517,12 @@ rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (!rte_eth_dev_is_valid_port(port_id)) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx port_id=%u", port_id); rte_errno = ENODEV; return 0; } if (qd == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", queue_id, port_id); rte_errno = EINVAL; return 0; @@ -6706,8 +6708,8 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_TX if (tx_port_id >= RTE_MAX_ETHPORTS || tx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, - "Invalid tx_port_id=%u or tx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, + "Invalid tx_port_id=%u or tx_queue_id=%u", tx_port_id, tx_queue_id); return 0; } @@ -6721,7 +6723,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(tx_port_id, 0); if (qd1 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Tx queue_id=%u for port_id=%u", tx_queue_id, tx_port_id); return 0; } @@ -6732,7 +6734,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, #ifdef RTE_ETHDEV_DEBUG_RX if (rx_port_id >= RTE_MAX_ETHPORTS || rx_queue_id >= RTE_MAX_QUEUES_PER_PORT) { - RTE_ETHDEV_LOG(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid rx_port_id=%u or rx_queue_id=%u", rx_port_id, rx_queue_id); return 0; } @@ -6746,7 +6748,7 @@ rte_eth_recycle_mbufs(uint16_t rx_port_id, uint16_t rx_queue_id, RTE_ETH_VALID_PORTID_OR_ERR_RET(rx_port_id, 0); if (qd2 == NULL) { - RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + RTE_ETHDEV_LOG_LINE(ERR, "Invalid Rx queue_id=%u for port_id=%u", rx_queue_id, rx_port_id); return 0; } diff --git a/lib/ethdev/rte_ethdev_cman.c b/lib/ethdev/rte_ethdev_cman.c index a9c4637521..41e38bdc89 100644 --- a/lib/ethdev/rte_ethdev_cman.c +++ b/lib/ethdev/rte_ethdev_cman.c @@ -21,12 +21,12 @@ rte_eth_cman_info_get(uint16_t port_id, struct rte_eth_cman_info *info) dev = &rte_eth_devices[port_id]; if (info == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management info is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management info is NULL"); return -EINVAL; } if (dev->dev_ops->cman_info_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -49,12 +49,12 @@ rte_eth_cman_config_init(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_init == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -77,12 +77,12 @@ rte_eth_cman_config_set(uint16_t port_id, const struct 
rte_eth_cman_config *conf dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_set == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } @@ -104,12 +104,12 @@ rte_eth_cman_config_get(uint16_t port_id, struct rte_eth_cman_config *config) dev = &rte_eth_devices[port_id]; if (config == NULL) { - RTE_ETHDEV_LOG(ERR, "congestion management config is NULL\n"); + RTE_ETHDEV_LOG_LINE(ERR, "congestion management config is NULL"); return -EINVAL; } if (dev->dev_ops->cman_config_get == NULL) { - RTE_ETHDEV_LOG(ERR, "Function not implemented\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Function not implemented"); return -ENOTSUP; } diff --git a/lib/ethdev/rte_ethdev_telemetry.c b/lib/ethdev/rte_ethdev_telemetry.c index b01028ce9b..6b873e7abe 100644 --- a/lib/ethdev/rte_ethdev_telemetry.c +++ b/lib/ethdev/rte_ethdev_telemetry.c @@ -36,8 +36,8 @@ eth_dev_parse_port_params(const char *params, uint16_t *port_id, pi = strtoul(params, end_param, 0); if (**end_param != '\0' && !has_next) - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (pi >= UINT16_MAX || !rte_eth_dev_is_valid_port(pi)) return -EINVAL; @@ -153,8 +153,8 @@ eth_dev_handle_port_xstats(const char *cmd __rte_unused, kvlist = rte_kvargs_parse(end_param, valid_keys); ret = rte_kvargs_process(kvlist, NULL, eth_dev_parse_hide_zero, &hide_zero); if (kvlist == NULL || ret != 0) - RTE_ETHDEV_LOG(NOTICE, - "Unknown extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Unknown extra parameters passed to ethdev telemetry command, ignoring"); rte_kvargs_free(kvlist); } @@ -445,8 +445,8 @@ eth_dev_handle_port_flow_ctrl(const char *cmd __rte_unused, ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get flow ctrl info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get flow ctrl info, ret = %d", ret); return ret; } @@ -496,8 +496,8 @@ ethdev_parse_queue_params(const char *params, bool is_rx, qid = strtoul(qid_param, &end_param, 0); } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (qid >= UINT16_MAX) return -EINVAL; @@ -522,8 +522,8 @@ eth_dev_add_burst_mode(uint16_t port_id, uint16_t queue_id, return 0; if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get burst mode for port %u\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get burst mode for port %u", port_id); return ret; } @@ -689,8 +689,8 @@ eth_dev_add_dcb_info(uint16_t port_id, struct rte_tel_data *d) ret = rte_eth_dev_get_dcb_info(port_id, &dcb_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get dcb info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get dcb info, ret = %d", ret); return ret; } @@ -769,8 +769,8 @@ eth_dev_handle_port_rss_info(const char *cmd __rte_unused, ret = rte_eth_dev_info_get(port_id, &dev_info); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device info, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device info, ret = %d", ret); return ret; } @@ 
-823,7 +823,7 @@ eth_dev_fec_capas_to_string(uint32_t fec_capa, char *fec_name, uint32_t len) count = snprintf(fec_name, len, "unknown "); if (count >= len) { - RTE_ETHDEV_LOG(WARNING, "FEC capa names may be truncated\n"); + RTE_ETHDEV_LOG_LINE(WARNING, "FEC capa names may be truncated"); count = len; } @@ -994,8 +994,8 @@ eth_dev_handle_port_vlan(const char *cmd __rte_unused, ret = rte_eth_dev_conf_get(port_id, &dev_conf); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, - "Failed to get device configuration, ret = %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, + "Failed to get device configuration, ret = %d", ret); return ret; } @@ -1115,7 +1115,7 @@ eth_dev_handle_port_tm_caps(const char *cmd __rte_unused, ret = rte_tm_capabilities_get(port_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1229,8 +1229,8 @@ eth_dev_parse_tm_params(char *params, uint32_t *result) ret = strtoul(splited_param, &params, 0); if (*params != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters passed to ethdev telemetry command, ignoring\n"); + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters passed to ethdev telemetry command, ignoring"); if (ret >= UINT32_MAX) return -EINVAL; @@ -1263,7 +1263,7 @@ eth_dev_handle_port_tm_level_caps(const char *cmd __rte_unused, ret = rte_tm_level_capabilities_get(port_id, level_id, &cap, &error); if (ret != 0) { - RTE_ETHDEV_LOG(ERR, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(ERR, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; @@ -1389,7 +1389,7 @@ eth_dev_handle_port_tm_node_caps(const char *cmd __rte_unused, return 0; out: - RTE_ETHDEV_LOG(WARNING, "error: %s, error type: %u\n", + RTE_ETHDEV_LOG_LINE(WARNING, "error: %s, error type: %u", error.message ? error.message : "no stated reason", error.type); return ret; diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index 549e329558..f49d1d3767 100644 --- a/lib/ethdev/rte_flow.c +++ b/lib/ethdev/rte_flow.c @@ -18,6 +18,8 @@ #include "ethdev_trace.h" +#define FLOW_LOG RTE_ETHDEV_LOG_LINE + /* Mbuf dynamic field name for metadata.
*/ int32_t rte_flow_dynf_metadata_offs = -1; @@ -1614,13 +1616,13 @@ rte_flow_info_get(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (port_info == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (likely(!!ops->info_get)) { @@ -1651,23 +1653,23 @@ rte_flow_configure(uint16_t port_id, if (unlikely(!ops)) return -rte_errno; if (dev->data->dev_configured == 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" is not configured.", port_id); return -EINVAL; } if (dev->data->dev_started != 0) { - RTE_FLOW_LOG(INFO, - "Device with port_id=%"PRIu16" already started.\n", + FLOW_LOG(INFO, + "Device with port_id=%"PRIu16" already started.", port_id); return -EINVAL; } if (port_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.", port_id); return -EINVAL; } if (queue_attr == NULL) { - RTE_FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.\n", port_id); + FLOW_LOG(ERR, "Port %"PRIu16" queue info is NULL.", port_id); return -EINVAL; } if ((port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) && @@ -1704,8 +1706,8 @@ rte_flow_pattern_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1713,8 +1715,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1722,8 +1724,8 @@ rte_flow_pattern_template_create(uint16_t port_id, return NULL; } if (pattern == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1791,8 +1793,8 @@ rte_flow_actions_template_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1800,8 +1802,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (template_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" template attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" template attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1809,8 +1811,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (actions == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1818,8 +1820,8 @@ rte_flow_actions_template_create(uint16_t port_id, return NULL; } if (masks == NULL) { - 
RTE_FLOW_LOG(ERR, - "Port %"PRIu16" masks is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" masks is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1889,8 +1891,8 @@ rte_flow_template_table_create(uint16_t port_id, if (unlikely(!ops)) return NULL; if (dev->data->flow_configured == 0) { - RTE_FLOW_LOG(INFO, - "Flow engine on port_id=%"PRIu16" is not configured.\n", + FLOW_LOG(INFO, + "Flow engine on port_id=%"PRIu16" is not configured.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_STATE, @@ -1898,8 +1900,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (table_attr == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" table attr is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" table attr is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1907,8 +1909,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (pattern_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" pattern templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" pattern templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, @@ -1916,8 +1918,8 @@ rte_flow_template_table_create(uint16_t port_id, return NULL; } if (actions_templates == NULL) { - RTE_FLOW_LOG(ERR, - "Port %"PRIu16" actions templates is NULL.\n", + FLOW_LOG(ERR, + "Port %"PRIu16" actions templates is NULL.", port_id); rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index affdc8121b..78b6bbb159 100644 --- a/lib/ethdev/rte_flow.h +++ b/lib/ethdev/rte_flow.h @@ -46,9 +46,6 @@ extern "C" { #endif -#define RTE_FLOW_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__) - /** * Flow rule attributes. 
* diff --git a/lib/ethdev/sff_telemetry.c b/lib/ethdev/sff_telemetry.c index f29e7fa882..b3f239d967 100644 --- a/lib/ethdev/sff_telemetry.c +++ b/lib/ethdev/sff_telemetry.c @@ -19,7 +19,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) int ret; if (d == NULL) { - RTE_ETHDEV_LOG(ERR, "Dict invalid\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Dict invalid"); return; } @@ -27,16 +27,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module info, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module info, %d", ret); break; } return; @@ -46,7 +46,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) einfo.length = minfo.eeprom_len; einfo.data = calloc(1, minfo.eeprom_len); if (einfo.data == NULL) { - RTE_ETHDEV_LOG(ERR, "Allocation of port %u EEPROM data failed\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Allocation of port %u EEPROM data failed", port_id); return; } @@ -54,16 +54,16 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) if (ret != 0) { switch (ret) { case -ENODEV: - RTE_ETHDEV_LOG(ERR, "Port index %d invalid\n", port_id); + RTE_ETHDEV_LOG_LINE(ERR, "Port index %d invalid", port_id); break; case -ENOTSUP: - RTE_ETHDEV_LOG(ERR, "Operation not supported by device\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Operation not supported by device"); break; case -EIO: - RTE_ETHDEV_LOG(ERR, "Device is removed\n"); + RTE_ETHDEV_LOG_LINE(ERR, "Device is removed"); break; default: - RTE_ETHDEV_LOG(ERR, "Unable to get port module EEPROM, %d\n", ret); + RTE_ETHDEV_LOG_LINE(ERR, "Unable to get port module EEPROM, %d", ret); break; } free(einfo.data); @@ -84,7 +84,7 @@ sff_port_module_eeprom_parse(uint16_t port_id, struct rte_tel_data *d) sff_8636_show_all(einfo.data, einfo.length, d); break; default: - RTE_ETHDEV_LOG(NOTICE, "Unsupported module type: %u\n", minfo.type); + RTE_ETHDEV_LOG_LINE(NOTICE, "Unsupported module type: %u", minfo.type); break; } @@ -99,7 +99,7 @@ ssf_add_dict_string(struct rte_tel_data *d, const char *name_str, const char *va if (d->type != TEL_DICT) return; if (d->data_len >= RTE_TEL_MAX_DICT_ENTRIES) { - RTE_ETHDEV_LOG(ERR, "data_len has exceeded the maximum number of inserts\n"); + RTE_ETHDEV_LOG_LINE(ERR, "data_len has exceeded the maximum number of inserts"); return; } @@ -135,13 +135,13 @@ eth_dev_handle_port_module_eeprom(const char *cmd __rte_unused, const char *para port_id = strtoul(params, &end_param, 0); if (errno != 0 || port_id >= UINT16_MAX) { - RTE_ETHDEV_LOG(ERR, "Invalid argument, %d\n", errno); + RTE_ETHDEV_LOG_LINE(ERR, "Invalid argument, %d", errno); return -1; } if (*end_param != '\0') - RTE_ETHDEV_LOG(NOTICE, - "Extra parameters [%s] passed to ethdev telemetry command, ignoring\n", + RTE_ETHDEV_LOG_LINE(NOTICE, + "Extra parameters [%s] passed to ethdev telemetry command, ignoring", end_param); rte_tel_data_start_dict(d); diff --git a/lib/member/member.h b/lib/member/member.h new file mode 100644 index 0000000000..a7b5b4a57c --- /dev/null +++ b/lib/member/member.h @@ -0,0 +1,14 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Red Hat, Inc. + */ + +#include <rte_log.h> + +extern int librte_member_logtype; +#define RTE_LOGTYPE_MEMBER librte_member_logtype + +#define MEMBER_LOG(level, ...) \ + RTE_LOG(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) + diff --git a/lib/member/rte_member.c b/lib/member/rte_member.c index 8f859f7fbd..57eb7affab 100644 --- a/lib/member/rte_member.c +++ b/lib/member/rte_member.c @@ -11,6 +11,7 @@ #include <rte_tailq.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" #include "rte_member_vbf.h" @@ -102,8 +103,8 @@ rte_member_create(const struct rte_member_parameters *params) if (params->key_len == 0 || params->prim_hash_seed == params->sec_hash_seed) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Create setsummary with " - "invalid parameters\n"); + MEMBER_LOG(ERR, "Create setsummary with " + "invalid parameters"); return NULL; } @@ -112,7 +113,7 @@ rte_member_create(const struct rte_member_parameters *params) sketch_key_ring = rte_ring_create_elem(ring_name, sizeof(uint32_t), rte_align32pow2(params->top_k), params->socket_id, 0); if (sketch_key_ring == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Ring Memory allocation failed"); return NULL; } } @@ -135,7 +136,7 @@ rte_member_create(const struct rte_member_parameters *params) } te = rte_zmalloc("MEMBER_TAILQ_ENTRY", sizeof(*te), 0); if (te == NULL) { - RTE_MEMBER_LOG(ERR, "tailq entry allocation failed\n"); + MEMBER_LOG(ERR, "tailq entry allocation failed"); goto error_unlock_exit; } @@ -144,7 +145,7 @@ rte_member_create(const struct rte_member_parameters *params) sizeof(struct rte_member_setsum), RTE_CACHE_LINE_SIZE, params->socket_id); if (setsum == NULL) { - RTE_MEMBER_LOG(ERR, "Create setsummary failed\n"); + MEMBER_LOG(ERR, "Create setsummary failed"); goto error_unlock_exit; } strlcpy(setsum->name, params->name, sizeof(setsum->name)); @@ -171,8 +172,8 @@ rte_member_create(const struct rte_member_parameters *params) if (ret < 0) goto error_unlock_exit; - RTE_MEMBER_LOG(DEBUG, "Creating a setsummary table with " - "mode %u\n", setsum->type); + MEMBER_LOG(DEBUG, "Creating a setsummary table with " + "mode %u", setsum->type); te->data = (void *)setsum; TAILQ_INSERT_TAIL(member_list, te, next); diff --git a/lib/member/rte_member.h b/lib/member/rte_member.h index b585904368..3278bbb5c1 100644 --- a/lib/member/rte_member.h +++ b/lib/member/rte_member.h @@ -100,15 +100,6 @@ typedef uint16_t member_set_t; #define MEMBER_HASH_FUNC rte_jhash #endif -extern int librte_member_logtype; - -#define RTE_MEMBER_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, \ - librte_member_logtype, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__,), \ - __func__, \ - RTE_FMT_TAIL(__VA_ARGS__,))) - /** @internal setsummary structure. 
*/ struct rte_member_setsum; diff --git a/lib/member/rte_member_heap.h b/lib/member/rte_member_heap.h index 9c4a01aebe..e0a3d54eab 100644 --- a/lib/member/rte_member_heap.h +++ b/lib/member/rte_member_heap.h @@ -6,6 +6,7 @@ #ifndef RTE_MEMBER_HEAP_H #define RTE_MEMBER_HEAP_H +#include "member.h" #include <rte_ring_elem.h> #include "rte_member.h" @@ -129,16 +130,16 @@ resize_hash_table(struct minheap *hp) while (1) { new_bkt_cnt = hp->hashtable->bkt_cnt * HASH_RESIZE_MULTI; - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]\n", + MEMBER_LOG(ERR, "Sketch Minheap HT load factor is [%f]", hp->hashtable->num_item / ((float)hp->hashtable->bkt_cnt * HASH_BKT_SIZE)); - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT resize happen!"); rte_free(hp->hashtable); hp->hashtable = rte_zmalloc_socket(NULL, sizeof(struct hash) + new_bkt_cnt * sizeof(struct hash_bkt), RTE_CACHE_LINE_SIZE, hp->socket); if (hp->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); return -ENOMEM; } @@ -147,8 +148,8 @@ resize_hash_table(struct minheap *hp) for (i = 0; i < hp->size; ++i) { if (hash_table_insert(hp->elem[i].key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, - "Sketch Minheap HT resize insert fail!\n"); + MEMBER_LOG(ERR, + "Sketch Minheap HT resize insert fail!"); break; } } @@ -174,7 +175,7 @@ rte_member_minheap_init(struct minheap *heap, int size, heap->elem = rte_zmalloc_socket(NULL, sizeof(struct node) * size, RTE_CACHE_LINE_SIZE, socket); if (heap->elem == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap elem allocation failed"); return -ENOMEM; } @@ -188,7 +189,7 @@ rte_member_minheap_init(struct minheap *heap, int size, RTE_CACHE_LINE_SIZE, socket); if (heap->hashtable == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap HT allocation failed"); rte_free(heap->elem); return -ENOMEM; } @@ -231,13 +232,13 @@ rte_member_heapify(struct minheap *hp, uint32_t idx, bool update_hash) if (update_hash) { if (hash_table_update(hp->elem[smallest].key, idx + 1, smallest + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } if (hash_table_update(hp->elem[idx].key, smallest + 1, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return; } } @@ -255,7 +256,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, uint32_t slot_id; if (rte_ring_sc_dequeue_elem(free_key_slot, &slot_id, sizeof(uint32_t)) != 0) { - RTE_MEMBER_LOG(ERR, "Minheap get empty keyslot failed\n"); + MEMBER_LOG(ERR, "Minheap get empty keyslot failed"); return -1; } @@ -270,7 +271,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -279,7 +280,7 @@ rte_member_minheap_insert_node(struct minheap *hp, const void *key, if (hash_table_insert(key, i + 1, hp->key_len, hp->hashtable) < 0) { if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash 
Table resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table resize failed"); return -1; } } @@ -296,7 +297,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, uint32_t offset = RTE_PTR_DIFF(hp->elem[idx].key, key_slot) / hp->key_len; if (hash_table_del(key, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -311,7 +312,7 @@ rte_member_minheap_delete_node(struct minheap *hp, const void *key, if (hash_table_update(hp->elem[idx].key, hp->size, idx + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -332,7 +333,7 @@ rte_member_minheap_replace_node(struct minheap *hp, recycle_key = hp->elem[0].key; if (hash_table_del(recycle_key, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table delete failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table delete failed"); return -1; } @@ -340,7 +341,7 @@ rte_member_minheap_replace_node(struct minheap *hp, if (hash_table_update(hp->elem[0].key, hp->size, 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } hp->size--; @@ -358,7 +359,7 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = hp->elem[PARENT(i)]; if (hash_table_update(hp->elem[i].key, PARENT(i) + 1, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table update failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table update failed"); return -1; } i = PARENT(i); @@ -367,9 +368,9 @@ rte_member_minheap_replace_node(struct minheap *hp, hp->elem[i] = nd; if (hash_table_insert(new_key, i + 1, hp->key_len, hp->hashtable) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace insert failed"); if (resize_hash_table(hp) < 0) { - RTE_MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed\n"); + MEMBER_LOG(ERR, "Minheap Hash Table replace resize failed"); return -1; } } diff --git a/lib/member/rte_member_ht.c b/lib/member/rte_member_ht.c index a85561b472..357097ff4b 100644 --- a/lib/member/rte_member_ht.c +++ b/lib/member/rte_member_ht.c @@ -9,6 +9,7 @@ #include <rte_log.h> #include <rte_vect.h> +#include "member.h" #include "rte_member.h" #include "rte_member_ht.h" @@ -84,8 +85,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, !rte_is_power_of_2(RTE_MEMBER_BUCKET_ENTRIES) || num_entries < RTE_MEMBER_BUCKET_ENTRIES) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership HT create with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership HT create with invalid parameters"); return -EINVAL; } @@ -98,8 +99,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, RTE_CACHE_LINE_SIZE, ss->socket_id); if (buckets == NULL) { - RTE_MEMBER_LOG(ERR, "memory allocation failed for HT " - "setsummary\n"); + MEMBER_LOG(ERR, "memory allocation failed for HT " + "setsummary"); return -ENOMEM; } @@ -121,8 +122,8 @@ rte_member_create_ht(struct rte_member_setsum *ss, #endif ss->sig_cmp_fn = RTE_MEMBER_COMPARE_SCALAR; - RTE_MEMBER_LOG(DEBUG, "Hash table based filter created, " - "the table has %u entries, %u buckets\n", + MEMBER_LOG(DEBUG, "Hash table based filter created, " + "the table has %u entries, %u buckets", num_entries, num_buckets); return 0; } diff --git a/lib/member/rte_member_sketch.c 
b/lib/member/rte_member_sketch.c index d5f35aabe9..e006e835d9 100644 --- a/lib/member/rte_member_sketch.c +++ b/lib/member/rte_member_sketch.c @@ -14,6 +14,7 @@ #include <rte_prefetch.h> #include <rte_ring_elem.h> +#include "member.h" #include "rte_member.h" #include "rte_member_sketch.h" #include "rte_member_heap.h" @@ -118,8 +119,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (params->sample_rate == 0 || params->sample_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, - "Membership Sketch created with invalid parameters\n"); + MEMBER_LOG(ERR, + "Membership Sketch created with invalid parameters"); return -EINVAL; } @@ -141,8 +142,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (ss->use_avx512 == true) { #ifdef CC_AVX512_SUPPORT ss->num_row = NUM_ROW_VEC; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch AVX512 update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch AVX512 update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_avx512; ss->sketch_lookup = sketch_lookup_avx512; ss->sketch_delete = sketch_delete_avx512; @@ -151,8 +152,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, #endif { ss->num_row = NUM_ROW_SCALAR; - RTE_MEMBER_LOG(NOTICE, - "Membership Sketch SCALAR update/lookup/delete ops is selected\n"); + MEMBER_LOG(NOTICE, + "Membership Sketch SCALAR update/lookup/delete ops is selected"); ss->sketch_update = sketch_update_scalar; ss->sketch_lookup = sketch_lookup_scalar; ss->sketch_delete = sketch_delete_scalar; @@ -173,21 +174,21 @@ rte_member_create_sketch(struct rte_member_setsum *ss, sizeof(uint64_t) * num_col * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->table == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Table memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Table memory allocation failed"); return -ENOMEM; } ss->hash_seeds = rte_zmalloc_socket(NULL, sizeof(uint64_t) * ss->num_row, RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->hash_seeds == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Hashseeds memory allocation failed"); return -ENOMEM; } ss->runtime_var = rte_zmalloc_socket(NULL, sizeof(struct sketch_runtime), RTE_CACHE_LINE_SIZE, ss->socket_id); if (ss->runtime_var == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime memory allocation failed"); rte_free(ss); return -ENOMEM; } @@ -205,7 +206,7 @@ rte_member_create_sketch(struct rte_member_setsum *ss, runtime->key_slots = rte_zmalloc_socket(NULL, ss->key_len * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->key_slots == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Key Slots allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Key Slots allocation failed"); goto error; } @@ -216,14 +217,14 @@ rte_member_create_sketch(struct rte_member_setsum *ss, if (rte_member_minheap_init(&(runtime->heap), params->top_k, ss->socket_id, params->prim_hash_seed) < 0) { - RTE_MEMBER_LOG(ERR, "Sketch Minheap allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Minheap allocation failed"); goto error_runtime; } runtime->report_array = rte_zmalloc_socket(NULL, sizeof(struct node) * ss->topk, RTE_CACHE_LINE_SIZE, ss->socket_id); if (runtime->report_array == NULL) { - RTE_MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed\n"); + MEMBER_LOG(ERR, "Sketch Runtime Report Array allocation failed"); goto error_runtime; } @@ -239,8 +240,8 @@ rte_member_create_sketch(struct rte_member_setsum *ss, ss->converge_thresh = 10 * 
pow(ss->error_rate, -2.0) * sqrt(log(1 / delta)); } - RTE_MEMBER_LOG(DEBUG, "Sketch created, " - "the total memory required is %u Bytes\n", ss->num_col * ss->num_row * 8); + MEMBER_LOG(DEBUG, "Sketch created, " + "the total memory required is %u Bytes", ss->num_col * ss->num_row * 8); return 0; @@ -382,8 +383,8 @@ should_converge(const struct rte_member_setsum *ss) /* For count min sketch - L1 norm */ if (runtime_var->pkt_cnt > ss->converge_thresh) { runtime_var->converged = 1; - RTE_MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " - "from key count %"PRIu64"\n", + MEMBER_LOG(DEBUG, "Sketch converged, begin sampling " + "from key count %"PRIu64, runtime_var->pkt_cnt); } } @@ -471,8 +472,8 @@ rte_member_add_sketch(const struct rte_member_setsum *ss, * the rte_member_add_sketch_byte_count routine should be used. */ if (ss->count_byte == 1) { - RTE_MEMBER_LOG(ERR, "Sketch is Byte Mode, " - "should use rte_member_add_byte_count()!\n"); + MEMBER_LOG(ERR, "Sketch is Byte Mode, " + "should use rte_member_add_byte_count()!"); return -EINVAL; } @@ -528,8 +529,8 @@ rte_member_add_sketch_byte_count(const struct rte_member_setsum *ss, /* should not call this API if not in count byte mode */ if (ss->count_byte == 0) { - RTE_MEMBER_LOG(ERR, "Sketch is Pkt Mode, " - "should use rte_member_add()!\n"); + MEMBER_LOG(ERR, "Sketch is Pkt Mode, " + "should use rte_member_add()!"); return -EINVAL; } diff --git a/lib/member/rte_member_vbf.c b/lib/member/rte_member_vbf.c index 5a0c51ecc0..5ad9487fad 100644 --- a/lib/member/rte_member_vbf.c +++ b/lib/member/rte_member_vbf.c @@ -9,6 +9,7 @@ #include <rte_errno.h> #include <rte_log.h> +#include "member.h" #include "rte_member.h" #include "rte_member_vbf.h" @@ -35,7 +36,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, params->false_positive_rate == 0 || params->false_positive_rate > 1) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership vBF create with invalid parameters\n"); + MEMBER_LOG(ERR, "Membership vBF create with invalid parameters"); return -EINVAL; } @@ -56,7 +57,7 @@ rte_member_create_vbf(struct rte_member_setsum *ss, if (fp_one_bf == 0) { rte_errno = EINVAL; - RTE_MEMBER_LOG(ERR, "Membership BF false positive rate is too small\n"); + MEMBER_LOG(ERR, "Membership BF false positive rate is too small"); return -EINVAL; } @@ -111,10 +112,10 @@ rte_member_create_vbf(struct rte_member_setsum *ss, ss->mul_shift = rte_ctz32(ss->num_set); ss->div_shift = rte_ctz32(32 >> ss->mul_shift); - RTE_MEMBER_LOG(DEBUG, "vector bloom filter created, " + MEMBER_LOG(DEBUG, "vector bloom filter created, " "each bloom filter expects %u keys, needs %u bits, %u hashes, " "with false positive rate set as %.5f, " - "The new calculated vBF false positive rate is %.5f\n", + "The new calculated vBF false positive rate is %.5f", num_keys_per_bf, ss->bits, ss->num_hashes, fp_one_bf, new_fp); ss->table = rte_zmalloc_socket(NULL, ss->num_set * (ss->bits >> 3), diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 5a1ec14d7a..70963e7ee7 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -16,10 +16,10 @@ #include "rte_pdump.h" RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); +#define RTE_LOGTYPE_PDUMP pdump_logtype -/* Macro for printing using RTE_LOG */ -#define PDUMP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, pdump_logtype, "%s(): " fmt, \ +#define PDUMP_LOG_LINE(level, fmt, args...) 
\ + RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ __func__, ## args) /* Used for the multi-process communication */ @@ -181,8 +181,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "rx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "rx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -195,8 +195,8 @@ pdump_register_rx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_first_rx_callback(port, qid, pdump_rx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add rx callback, errno=%d", rte_errno); return rte_errno; } @@ -204,15 +204,15 @@ pdump_register_rx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing rx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing rx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_rx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove rx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove rx callback, errno=%d", -ret); return ret; } @@ -239,8 +239,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, if (operation == ENABLE) { if (cbs->cb) { - PDUMP_LOG(ERR, - "tx callback for port=%d queue=%d, already exists\n", + PDUMP_LOG_LINE(ERR, + "tx callback for port=%d queue=%d, already exists", port, qid); return -EEXIST; } @@ -253,8 +253,8 @@ pdump_register_tx_callbacks(enum pdump_version ver, cbs->cb = rte_eth_add_tx_callback(port, qid, pdump_tx, cbs); if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "failed to add tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to add tx callback, errno=%d", rte_errno); return rte_errno; } @@ -262,15 +262,15 @@ pdump_register_tx_callbacks(enum pdump_version ver, int ret; if (cbs->cb == NULL) { - PDUMP_LOG(ERR, - "no existing tx callback for port=%d queue=%d\n", + PDUMP_LOG_LINE(ERR, + "no existing tx callback for port=%d queue=%d", port, qid); return -EINVAL; } ret = rte_eth_remove_tx_callback(port, qid, cbs->cb); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to remove tx callback, errno=%d\n", + PDUMP_LOG_LINE(ERR, + "failed to remove tx callback, errno=%d", -ret); return ret; } @@ -295,22 +295,22 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) /* Check for possible DPDK version mismatch */ if (!(p->ver == V1 || p->ver == V2)) { - PDUMP_LOG(ERR, - "incorrect client version %u\n", p->ver); + PDUMP_LOG_LINE(ERR, + "incorrect client version %u", p->ver); return -EINVAL; } if (p->prm) { if (p->prm->prog_arg.type != RTE_BPF_ARG_PTR_MBUF) { - PDUMP_LOG(ERR, - "invalid BPF program type: %u\n", + PDUMP_LOG_LINE(ERR, + "invalid BPF program type: %u", p->prm->prog_arg.type); return -EINVAL; } filter = rte_bpf_load(p->prm); if (filter == NULL) { - PDUMP_LOG(ERR, "cannot load BPF filter: %s\n", + PDUMP_LOG_LINE(ERR, "cannot load BPF filter: %s", rte_strerror(rte_errno)); return -rte_errno; } @@ -324,8 +324,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_get_port_by_name(p->device, &port); if (ret < 0) { - PDUMP_LOG(ERR, - "failed to get port id for device id=%s\n", + PDUMP_LOG_LINE(ERR, + "failed to get port id for device id=%s", p->device); return -EINVAL; } @@ -336,8 +336,8 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + 
PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -345,19 +345,19 @@ set_pdump_rxtx_cbs(const struct pdump_request *p) nb_rx_q = dev_info.nb_rx_queues; nb_tx_q = dev_info.nb_tx_queues; if (nb_rx_q == 0 && flags & RTE_PDUMP_FLAG_RX) { - PDUMP_LOG(ERR, - "number of rx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of rx queues cannot be 0"); return -EINVAL; } if (nb_tx_q == 0 && flags & RTE_PDUMP_FLAG_TX) { - PDUMP_LOG(ERR, - "number of tx queues cannot be 0\n"); + PDUMP_LOG_LINE(ERR, + "number of tx queues cannot be 0"); return -EINVAL; } if ((nb_tx_q == 0 || nb_rx_q == 0) && flags == RTE_PDUMP_FLAG_RXTX) { - PDUMP_LOG(ERR, - "both tx&rx queues must be non zero\n"); + PDUMP_LOG_LINE(ERR, + "both tx&rx queues must be non zero"); return -EINVAL; } } @@ -394,7 +394,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) /* recv client requests */ if (mp_msg->len_param != sizeof(*cli_req)) { - PDUMP_LOG(ERR, "failed to recv from client\n"); + PDUMP_LOG_LINE(ERR, "failed to recv from client"); resp->err_value = -EINVAL; } else { cli_req = (const struct pdump_request *)mp_msg->param; @@ -407,7 +407,7 @@ pdump_server(const struct rte_mp_msg *mp_msg, const void *peer) mp_resp.len_param = sizeof(*resp); mp_resp.num_fds = 0; if (rte_mp_reply(&mp_resp, peer) < 0) { - PDUMP_LOG(ERR, "failed to send to client:%s\n", + PDUMP_LOG_LINE(ERR, "failed to send to client:%s", strerror(rte_errno)); return -1; } @@ -424,7 +424,7 @@ rte_pdump_init(void) mz = rte_memzone_reserve(MZ_RTE_PDUMP_STATS, sizeof(*pdump_stats), rte_socket_id(), 0); if (mz == NULL) { - PDUMP_LOG(ERR, "cannot allocate pdump statistics\n"); + PDUMP_LOG_LINE(ERR, "cannot allocate pdump statistics"); rte_errno = ENOMEM; return -1; } @@ -454,22 +454,22 @@ static int pdump_validate_ring_mp(struct rte_ring *ring, struct rte_mempool *mp) { if (ring == NULL || mp == NULL) { - PDUMP_LOG(ERR, "NULL ring or mempool\n"); + PDUMP_LOG_LINE(ERR, "NULL ring or mempool"); rte_errno = EINVAL; return -1; } if (mp->flags & RTE_MEMPOOL_F_SP_PUT || mp->flags & RTE_MEMPOOL_F_SC_GET) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "mempool with SP or SC set not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } if (rte_ring_is_prod_single(ring) || rte_ring_is_cons_single(ring)) { - PDUMP_LOG(ERR, + PDUMP_LOG_LINE(ERR, "ring with SP or SC set is not valid for pdump," - "must have MP and MC set\n"); + "must have MP and MC set"); rte_errno = EINVAL; return -1; } @@ -481,16 +481,16 @@ static int pdump_validate_flags(uint32_t flags) { if ((flags & RTE_PDUMP_FLAG_RXTX) == 0) { - PDUMP_LOG(ERR, - "invalid flags, should be either rx/tx/rxtx\n"); + PDUMP_LOG_LINE(ERR, + "invalid flags, should be either rx/tx/rxtx"); rte_errno = EINVAL; return -1; } /* mask off the flags we know about */ if (flags & ~(RTE_PDUMP_FLAG_RXTX | RTE_PDUMP_FLAG_PCAPNG)) { - PDUMP_LOG(ERR, - "unknown flags: %#x\n", flags); + PDUMP_LOG_LINE(ERR, + "unknown flags: %#x", flags); rte_errno = ENOTSUP; return -1; } @@ -504,14 +504,14 @@ pdump_validate_port(uint16_t port, char *name) int ret = 0; if (port >= RTE_MAX_ETHPORTS) { - PDUMP_LOG(ERR, "Invalid port id %u\n", port); + PDUMP_LOG_LINE(ERR, "Invalid port id %u", port); rte_errno = EINVAL; return -1; } ret = rte_eth_dev_get_name_by_port(port, name); if (ret < 0) { - PDUMP_LOG(ERR, "port %u to name mapping failed\n", + PDUMP_LOG_LINE(ERR, "port %u to name mapping failed", port); rte_errno = EINVAL; return -1; @@ 
-536,8 +536,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, struct pdump_response *resp; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { - PDUMP_LOG(ERR, - "pdump enable/disable not allowed in primary process\n"); + PDUMP_LOG_LINE(ERR, + "pdump enable/disable not allowed in primary process"); return -EINVAL; } @@ -570,8 +570,8 @@ pdump_prepare_client_request(const char *device, uint16_t queue, } if (ret < 0) - PDUMP_LOG(ERR, - "client request for pdump enable/disable failed\n"); + PDUMP_LOG_LINE(ERR, + "client request for pdump enable/disable failed"); return ret; } @@ -738,8 +738,8 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) memset(stats, 0, sizeof(*stats)); ret = rte_eth_dev_info_get(port, &dev_info); if (ret != 0) { - PDUMP_LOG(ERR, - "Error during getting device (port %u) info: %s\n", + PDUMP_LOG_LINE(ERR, + "Error during getting device (port %u) info: %s", port, strerror(-ret)); return ret; } @@ -747,7 +747,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) if (pdump_stats == NULL) { if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* rte_pdump_init was not called */ - PDUMP_LOG(ERR, "pdump stats not initialized\n"); + PDUMP_LOG_LINE(ERR, "pdump stats not initialized"); rte_errno = EINVAL; return -1; } @@ -756,7 +756,7 @@ rte_pdump_stats(uint16_t port, struct rte_pdump_stats *stats) mz = rte_memzone_lookup(MZ_RTE_PDUMP_STATS); if (mz == NULL) { /* rte_pdump_init was not called in primary process?? */ - PDUMP_LOG(ERR, "can not find pdump stats\n"); + PDUMP_LOG_LINE(ERR, "can not find pdump stats"); rte_errno = EINVAL; return -1; } diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c index 416d1fb6da..f8d978d03d 100644 --- a/lib/power/power_acpi_cpufreq.c +++ b/lib/power/power_acpi_cpufreq.c @@ -72,7 +72,7 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { POWER_LOG(ERR, "Fail to set file position indicator to 0 " @@ -155,7 +155,7 @@ power_get_available_freqs(struct acpi_power_info *pi) /* Store the available frequencies into power context */ for (i = 0, pi->nb_freqs = 0; i < count; i++) { - POWER_DEBUG_TRACE("Lcore %u frequency[%d]: %s\n", pi->lcore_id, + POWER_DEBUG_LOG("Lcore %u frequency[%d]: %s", pi->lcore_id, i, freqs[i]); pi->freqs[pi->nb_freqs++] = strtoul(freqs[i], &p, POWER_CONVERT_TO_DECIMAL); @@ -164,17 +164,17 @@ power_get_available_freqs(struct acpi_power_info *pi) if ((pi->freqs[0]-1000) == pi->freqs[1]) { pi->turbo_available = 1; pi->turbo_enable = 1; - POWER_DEBUG_TRACE("Lcore %u Can do Turbo Boost\n", + POWER_DEBUG_LOG("Lcore %u Can do Turbo Boost", pi->lcore_id); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Turbo Boost not available on Lcore %u\n", + POWER_DEBUG_LOG("Turbo Boost not available on Lcore %u", pi->lcore_id); } ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", count, pi->lcore_id); out: if (f != NULL) diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c index 67e3aa735a..028f84416b 100644 --- a/lib/power/power_amd_pstate_cpufreq.c +++ b/lib/power/power_amd_pstate_cpufreq.c @@ -79,7 +79,7 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx) if (idx == 
pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { POWER_LOG(ERR, "Fail to set file position indicator to 0 " @@ -153,14 +153,14 @@ power_check_turbo(struct amd_pstate_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -277,7 +277,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_common.c b/lib/power/power_common.c index 5d6c4291de..590986d5ef 100644 --- a/lib/power/power_common.c +++ b/lib/power/power_common.c @@ -182,8 +182,8 @@ power_set_governor(unsigned int lcore_id, const char *new_governor, /* Check if current governor is already what we want */ if (strcmp(buf, new_governor) == 0) { ret = 0; - POWER_DEBUG_TRACE("Power management governor of lcore %u is " - "already %s\n", lcore_id, new_governor); + POWER_DEBUG_LOG("Power management governor of lcore %u is " + "already %s", lcore_id, new_governor); goto out; } diff --git a/lib/power/power_common.h b/lib/power/power_common.h index 4302370b5e..877ff2ca4c 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -16,10 +16,10 @@ extern int power_logtype; RTE_LOG(level, POWER, fmt "\n", ## __VA_ARGS__) #ifdef RTE_LIBRTE_POWER_DEBUG -#define POWER_DEBUG_TRACE(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt, __func__, ## args) +#define POWER_DEBUG_LOG(fmt, args...) \ + RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) #else -#define POWER_DEBUG_TRACE(fmt, args...) +#define POWER_DEBUG_LOG(fmt, args...) #endif /* check if scaling driver matches one we want */ diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c index dbbd166372..3ddf39bd76 100644 --- a/lib/power/power_cppc_cpufreq.c +++ b/lib/power/power_cppc_cpufreq.c @@ -82,7 +82,7 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx) if (idx == pi->curr_idx) return 0; - POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency[%u] %u to be set for lcore %u", idx, pi->freqs[idx], pi->lcore_id); if (fseek(pi->f, 0, SEEK_SET) < 0) { POWER_LOG(ERR, "Fail to set file position indicator to 0 " @@ -172,14 +172,14 @@ power_check_turbo(struct cppc_power_info *pi) pi->turbo_available = 1; pi->turbo_enable = 1; ret = 0; - POWER_DEBUG_TRACE("Lcore %u can do Turbo Boost! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u can do Turbo Boost! highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } else { pi->turbo_available = 0; pi->turbo_enable = 0; - POWER_DEBUG_TRACE("Lcore %u Turbo not available! highest perf %u, " - "nominal perf %u\n", + POWER_DEBUG_LOG("Lcore %u Turbo not available! 
highest perf %u, " + "nominal perf %u", pi->lcore_id, highest_perf, nominal_perf); } @@ -265,7 +265,7 @@ power_get_available_freqs(struct cppc_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c index c5c204c670..3ce8fccec2 100644 --- a/lib/power/power_intel_uncore.c +++ b/lib/power/power_intel_uncore.c @@ -90,7 +90,7 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Uncore frequency '%u' to be set for pkg %02u die %02u\n", + POWER_DEBUG_LOG("Uncore frequency '%u' to be set for pkg %02u die %02u", target_uncore_freq, ui->pkg, ui->die); /* write the minimum value first if the target freq is less than current max */ @@ -235,7 +235,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of pkg %02u die %02u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of pkg %02u die %02u are available", num_uncore_freqs, ui->pkg, ui->die); out: diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c index c287ac54f8..73138dc4e4 100644 --- a/lib/power/power_pstate_cpufreq.c +++ b/lib/power/power_pstate_cpufreq.c @@ -104,7 +104,7 @@ power_read_turbo_pct(uint64_t *outVal) goto out; } - POWER_DEBUG_TRACE("power turbo pct: %"PRIu64"\n", *outVal); + POWER_DEBUG_LOG("power turbo pct: %"PRIu64, *outVal); out: close(fd); return ret; @@ -204,7 +204,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi) max_non_turbo = base_min_ratio + (100 - max_non_turbo) * (base_max_ratio - base_min_ratio) / 100; - POWER_DEBUG_TRACE("no turbo perf %"PRIu64"\n", max_non_turbo); + POWER_DEBUG_LOG("no turbo perf %"PRIu64, max_non_turbo); pi->non_turbo_max_ratio = (uint32_t)max_non_turbo; @@ -310,7 +310,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_min); @@ -333,7 +333,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx) return -1; } - POWER_DEBUG_TRACE("Frequency '%u' to be set for lcore %u\n", + POWER_DEBUG_LOG("Frequency '%u' to be set for lcore %u", target_freq, pi->lcore_id); fflush(pi->f_cur_max); @@ -434,7 +434,7 @@ power_get_available_freqs(struct pstate_power_info *pi) else base_max_freq = pi->non_turbo_max_ratio * BUS_FREQ; - POWER_DEBUG_TRACE("sys min %u, sys max %u, base_max %u\n", + POWER_DEBUG_LOG("sys min %u, sys max %u, base_max %u", sys_min_freq, sys_max_freq, base_max_freq); @@ -471,7 +471,7 @@ power_get_available_freqs(struct pstate_power_info *pi) ret = 0; - POWER_DEBUG_TRACE("%d frequency(s) of lcore %u are available\n", + POWER_DEBUG_LOG("%d frequency(s) of lcore %u are available", num_freqs, pi->lcore_id); out: diff --git a/lib/regexdev/rte_regexdev.c b/lib/regexdev/rte_regexdev.c index d38a85eb0b..b2c4b49d97 100644 --- a/lib/regexdev/rte_regexdev.c +++ b/lib/regexdev/rte_regexdev.c @@ -73,16 +73,16 @@ regexdev_check_name(const char *name) size_t name_len; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Name can't be NULL\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Name can't be NULL"); return -EINVAL; } name_len = strnlen(name, RTE_REGEXDEV_NAME_MAX_LEN); if (name_len == 0) { - RTE_REGEXDEV_LOG(ERR, "Zero length RegEx device name\n"); + 
RTE_REGEXDEV_LOG_LINE(ERR, "Zero length RegEx device name"); return -EINVAL; } if (name_len >= RTE_REGEXDEV_NAME_MAX_LEN) { - RTE_REGEXDEV_LOG(ERR, "RegEx device name is too long\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device name is too long"); return -EINVAL; } return (int)name_len; @@ -101,17 +101,17 @@ rte_regexdev_register(const char *name) return NULL; dev = regexdev_allocated(name); if (dev != NULL) { - RTE_REGEXDEV_LOG(ERR, "RegEx device already allocated\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "RegEx device already allocated"); return NULL; } dev_id = regexdev_find_free_dev(); if (dev_id == RTE_MAX_REGEXDEV_DEVS) { - RTE_REGEXDEV_LOG - (ERR, "Reached maximum number of RegEx devices\n"); + RTE_REGEXDEV_LOG_LINE + (ERR, "Reached maximum number of RegEx devices"); return NULL; } if (regexdev_shared_data_prepare() < 0) { - RTE_REGEXDEV_LOG(ERR, "Cannot allocate RegEx shared data\n"); + RTE_REGEXDEV_LOG_LINE(ERR, "Cannot allocate RegEx shared data"); return NULL; } @@ -215,8 +215,8 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) if (*dev->dev_ops->dev_configure == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -225,66 +225,66 @@ rte_regexdev_configure(uint8_t dev_id, const struct rte_regexdev_config *cfg) return ret; if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_CROSS_BUFFER_SCAN_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_CROSS_BUFFER_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support cross buffer scan\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support cross buffer scan", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_AS_END_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_AS_END_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match as end\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match as end", dev_id); return -EINVAL; } if ((cfg->dev_cfg_flags & RTE_REGEXDEV_CFG_MATCH_ALL_F) && !(dev_info.regexdev_capa & RTE_REGEXDEV_SUPP_MATCH_ALL_F)) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u doesn't support match all\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u doesn't support match all", dev_id); return -EINVAL; } if (cfg->nb_groups == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups must be > 0", dev_id); return -EINVAL; } if (cfg->nb_groups > dev_info.max_groups) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of groups %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of groups %d > %d", dev_id, cfg->nb_groups, dev_info.max_groups); return -EINVAL; } if (cfg->nb_max_matches == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches must be > 0", dev_id); return -EINVAL; } if (cfg->nb_max_matches > dev_info.max_matches) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of matches %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of matches %d > %d", dev_id, cfg->nb_max_matches, dev_info.max_matches); return -EINVAL; } if (cfg->nb_queue_pairs == 0) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues must be > 0", dev_id); return -EINVAL; } if (cfg->nb_queue_pairs > dev_info.max_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Dev %u num of queues %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %u num of queues %d > %d", dev_id, 
cfg->nb_queue_pairs, dev_info.max_queue_pairs); return -EINVAL; } if (cfg->nb_rules_per_group == 0) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group must be > 0\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group must be > 0", dev_id); return -EINVAL; } if (cfg->nb_rules_per_group > dev_info.max_rules_per_group) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u num of rules per group %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u num of rules per group %d > %d", dev_id, cfg->nb_rules_per_group, dev_info.max_rules_per_group); return -EINVAL; @@ -306,21 +306,21 @@ rte_regexdev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id, if (*dev->dev_ops->dev_qp_setup == NULL) return -ENOTSUP; if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } if (queue_pair_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, - "Dev %u invalid queue %d > %d\n", + RTE_REGEXDEV_LOG_LINE(ERR, + "Dev %u invalid queue %d > %d", dev_id, queue_pair_id, dev->data->dev_conf.nb_queue_pairs); return -EINVAL; } if (dev->data->dev_started) { - RTE_REGEXDEV_LOG - (ERR, "Dev %u must be stopped to allow configuration\n", + RTE_REGEXDEV_LOG_LINE + (ERR, "Dev %u must be stopped to allow configuration", dev_id); return -EBUSY; } @@ -383,7 +383,7 @@ rte_regexdev_attr_get(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_get == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -401,7 +401,7 @@ rte_regexdev_attr_set(uint8_t dev_id, enum rte_regexdev_attr_id attr_id, if (*dev->dev_ops->dev_attr_set == NULL) return -ENOTSUP; if (attr_value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d attribute value can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d attribute value can't be NULL", dev_id); return -EINVAL; } @@ -420,7 +420,7 @@ rte_regexdev_rule_db_update(uint8_t dev_id, if (*dev->dev_ops->dev_rule_db_update == NULL) return -ENOTSUP; if (rules == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -450,7 +450,7 @@ rte_regexdev_rule_db_import(uint8_t dev_id, const char *rule_db, if (*dev->dev_ops->dev_db_import == NULL) return -ENOTSUP; if (rule_db == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d rules can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d rules can't be NULL", dev_id); return -EINVAL; } @@ -480,7 +480,7 @@ rte_regexdev_xstats_names_get(uint8_t dev_id, if (*dev->dev_ops->dev_xstats_names_get == NULL) return -ENOTSUP; if (xstats_map == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d xstats map can't be NULL\n", + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d xstats map can't be NULL", dev_id); return -EINVAL; } @@ -498,11 +498,11 @@ rte_regexdev_xstats_get(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_get == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } if (values == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d values can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d values can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_get)(dev, ids, values, n); @@ -519,15 +519,15 
@@ rte_regexdev_xstats_by_name_get(uint8_t dev_id, const char *name, if (*dev->dev_ops->dev_xstats_by_name_get == NULL) return -ENOTSUP; if (name == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d name can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d name can't be NULL", dev_id); return -EINVAL; } if (id == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d id can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d id can't be NULL", dev_id); return -EINVAL; } if (value == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d value can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d value can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_by_name_get)(dev, name, id, value); @@ -544,7 +544,7 @@ rte_regexdev_xstats_reset(uint8_t dev_id, const uint16_t *ids, if (*dev->dev_ops->dev_xstats_reset == NULL) return -ENOTSUP; if (ids == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d ids can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d ids can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_xstats_reset)(dev, ids, nb_ids); @@ -572,7 +572,7 @@ rte_regexdev_dump(uint8_t dev_id, FILE *f) if (*dev->dev_ops->dev_dump == NULL) return -ENOTSUP; if (f == NULL) { - RTE_REGEXDEV_LOG(ERR, "Dev %d file can't be NULL\n", dev_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Dev %d file can't be NULL", dev_id); return -EINVAL; } return (*dev->dev_ops->dev_dump)(dev, f); diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index d50af775b5..a215d8768e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -206,21 +206,23 @@ extern "C" { #define RTE_REGEXDEV_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN extern int rte_regexdev_logtype; +#define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype -#define RTE_REGEXDEV_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_regexdev_logtype, "" __VA_ARGS__) +#define RTE_REGEXDEV_LOG_LINE(level, ...) \ + RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return retval; \ } \ } while (0) #define RTE_REGEXDEV_VALID_DEV_ID_OR_RET(dev_id) do { \ if (!rte_regexdev_is_valid_dev(dev_id)) { \ - RTE_REGEXDEV_LOG(ERR, "Invalid dev_id=%u\n", dev_id); \ + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid dev_id=%u", dev_id); \ return; \ } \ } while (0) @@ -1475,7 +1477,7 @@ rte_regexdev_enqueue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->enqueue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif @@ -1535,7 +1537,7 @@ rte_regexdev_dequeue_burst(uint8_t dev_id, uint16_t qp_id, if (*dev->dequeue == NULL) return -ENOTSUP; if (qp_id >= dev->data->dev_conf.nb_queue_pairs) { - RTE_REGEXDEV_LOG(ERR, "Invalid queue %d\n", qp_id); + RTE_REGEXDEV_LOG_LINE(ERR, "Invalid queue %d", qp_id); return -EINVAL; } #endif diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 92982842a8..747eba2656 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -56,7 +56,10 @@ static const char *socket_dir; /* runtime directory */ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); -#define TMTY_LOG(l, ...) 
rte_log(RTE_LOG_ ## l, logtype, "TELEMETRY: " __VA_ARGS__) +#define RTE_LOGTYPE_TMTY logtype +#define TMTY_LOG_LINE(l, ...) \ + RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; @@ -417,7 +420,7 @@ socket_listener(void *socket) struct socket *s = (struct socket *)socket; int s_accepted = accept(s->sock, NULL, NULL); if (s_accepted < 0) { - TMTY_LOG(ERR, "Error with accept, telemetry thread quitting\n"); + TMTY_LOG_LINE(ERR, "Error with accept, telemetry thread quitting"); return NULL; } if (s->num_clients != NULL) { @@ -433,7 +436,7 @@ socket_listener(void *socket) rc = pthread_create(&th, NULL, s->fn, (void *)(uintptr_t)s_accepted); if (rc != 0) { - TMTY_LOG(ERR, "Error with create client thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create client thread: %s", strerror(rc)); close(s_accepted); if (s->num_clients != NULL) @@ -469,22 +472,22 @@ create_socket(char *path) { int sock = socket(AF_UNIX, SOCK_SEQPACKET, 0); if (sock < 0) { - TMTY_LOG(ERR, "Error with socket creation, %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error with socket creation, %s", strerror(errno)); return -1; } struct sockaddr_un sun = {.sun_family = AF_UNIX}; strlcpy(sun.sun_path, path, sizeof(sun.sun_path)); - TMTY_LOG(DEBUG, "Attempting socket bind to path '%s'\n", path); + TMTY_LOG_LINE(DEBUG, "Attempting socket bind to path '%s'", path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { struct stat st; - TMTY_LOG(DEBUG, "Initial bind to socket '%s' failed.\n", path); + TMTY_LOG_LINE(DEBUG, "Initial bind to socket '%s' failed.", path); /* first check if we have a runtime dir */ if (stat(socket_dir, &st) < 0 || !S_ISDIR(st.st_mode)) { - TMTY_LOG(ERR, "Cannot access DPDK runtime directory: %s\n", socket_dir); + TMTY_LOG_LINE(ERR, "Cannot access DPDK runtime directory: %s", socket_dir); close(sock); return -ENOENT; } @@ -496,22 +499,22 @@ create_socket(char *path) } /* socket is not active, delete and attempt rebind */ - TMTY_LOG(DEBUG, "Attempting unlink and retrying bind\n"); + TMTY_LOG_LINE(DEBUG, "Attempting unlink and retrying bind"); unlink(sun.sun_path); if (bind(sock, (void *) &sun, sizeof(sun)) < 0) { - TMTY_LOG(ERR, "Error binding socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error binding socket: %s", strerror(errno)); close(sock); return -errno; /* if unlink failed, this will be -EADDRINUSE as above */ } } if (listen(sock, 1) < 0) { - TMTY_LOG(ERR, "Error calling listen for socket: %s\n", strerror(errno)); + TMTY_LOG_LINE(ERR, "Error calling listen for socket: %s", strerror(errno)); unlink(sun.sun_path); close(sock); return -errno; } - TMTY_LOG(DEBUG, "Socket creation and binding ok\n"); + TMTY_LOG_LINE(DEBUG, "Socket creation and binding ok"); return sock; } @@ -535,14 +538,14 @@ telemetry_legacy_init(void) int rc; if (num_legacy_callbacks == 1) { - TMTY_LOG(WARNING, "No legacy callbacks, legacy socket not created\n"); + TMTY_LOG_LINE(WARNING, "No legacy callbacks, legacy socket not created"); return -1; } v1_socket.fn = legacy_client_handler; if ((size_t) snprintf(v1_socket.path, sizeof(v1_socket.path), "%s/telemetry", socket_dir) >= sizeof(v1_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v1_socket.sock = create_socket(v1_socket.path); @@ -552,7 +555,7 @@ telemetry_legacy_init(void) } rc = pthread_create(&t_old, 
NULL, socket_listener, &v1_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create legacy socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create legacy socket thread: %s", strerror(rc)); close(v1_socket.sock); v1_socket.sock = -1; @@ -562,7 +565,7 @@ telemetry_legacy_init(void) } pthread_setaffinity_np(t_old, sizeof(*thread_cpuset), thread_cpuset); set_thread_name(t_old, "dpdk-telemet-v1"); - TMTY_LOG(DEBUG, "Legacy telemetry socket initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Legacy telemetry socket initialized ok"); pthread_detach(t_old); return 0; } @@ -584,7 +587,7 @@ telemetry_v2_init(void) "Returns help text for a command. Parameters: string command"); v2_socket.fn = client_handler; if (strlcpy(spath, get_socket_path(socket_dir, 2), sizeof(spath)) >= sizeof(spath)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } memcpy(v2_socket.path, spath, sizeof(v2_socket.path)); @@ -599,14 +602,14 @@ telemetry_v2_init(void) /* add a suffix to the path if the basic version fails */ if (snprintf(v2_socket.path, sizeof(v2_socket.path), "%s:%d", spath, ++suffix) >= (int)sizeof(v2_socket.path)) { - TMTY_LOG(ERR, "Error with socket binding, path too long\n"); + TMTY_LOG_LINE(ERR, "Error with socket binding, path too long"); return -1; } v2_socket.sock = create_socket(v2_socket.path); } rc = pthread_create(&t_new, NULL, socket_listener, &v2_socket); if (rc != 0) { - TMTY_LOG(ERR, "Error with create socket thread: %s\n", + TMTY_LOG_LINE(ERR, "Error with create socket thread: %s", strerror(rc)); close(v2_socket.sock); v2_socket.sock = -1; @@ -634,7 +637,7 @@ rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_ #ifndef RTE_EXEC_ENV_WINDOWS if (telemetry_v2_init() != 0) return -1; - TMTY_LOG(DEBUG, "Telemetry initialized ok\n"); + TMTY_LOG_LINE(DEBUG, "Telemetry initialized ok"); telemetry_legacy_init(); #endif /* RTE_EXEC_ENV_WINDOWS */ diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c index 10ab77262e..f2c275a7d7 100644 --- a/lib/vhost/iotlb.c +++ b/lib/vhost/iotlb.c @@ -150,16 +150,16 @@ vhost_user_iotlb_pending_insert(struct virtio_net *dev, uint64_t iova, uint8_t p node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for pending insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for pending insertion"); if (!TAILQ_EMPTY(&dev->iotlb_pending_list)) vhost_user_iotlb_pending_remove_all(dev); else vhost_user_iotlb_cache_random_evict(dev); node = vhost_user_iotlb_pool_get(dev); if (node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, pending insertion failure\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, pending insertion failure"); return; } } @@ -253,16 +253,16 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, uint64_t iova, uint64_t ua new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "IOTLB pool empty, clear entries for cache insertion\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "IOTLB pool empty, clear entries for cache insertion"); if (!TAILQ_EMPTY(&dev->iotlb_list)) vhost_user_iotlb_cache_random_evict(dev); else vhost_user_iotlb_pending_remove_all(dev); new_node = vhost_user_iotlb_pool_get(dev); if (new_node == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "IOTLB pool still empty, cache insertion failed\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, + "IOTLB pool still empty, cache insertion failed"); return; } } @@ -415,7 +415,7 @@ vhost_user_iotlb_init(struct virtio_net *dev) dev->iotlb_pool = rte_calloc_socket("iotlb", IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0, socket); if (!dev->iotlb_pool) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to create IOTLB cache pool\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to create IOTLB cache pool"); return -1; } for (i = 0; i < IOTLB_CACHE_SIZE; i++) diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c index 5882e44176..a2fdac30a4 100644 --- a/lib/vhost/socket.c +++ b/lib/vhost/socket.c @@ -128,17 +128,17 @@ read_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int m ret = recvmsg(sockfd, &msgh, 0); if (ret <= 0) { if (ret) - VHOST_LOG_CONFIG(ifname, ERR, "recvmsg failed on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "recvmsg failed on fd %d (%s)", sockfd, strerror(errno)); return ret; } if (msgh.msg_flags & MSG_TRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated msg (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated msg (fd %d)", sockfd); /* MSG_CTRUNC may be caused by LSM misconfiguration */ if (msgh.msg_flags & MSG_CTRUNC) - VHOST_LOG_CONFIG(ifname, ERR, "truncated control data (fd %d)\n", sockfd); + VHOST_CONFIG_LOG(ifname, ERR, "truncated control data (fd %d)", sockfd); for (cmsg = CMSG_FIRSTHDR(&msgh); cmsg != NULL; cmsg = CMSG_NXTHDR(&msgh, cmsg)) { @@ -181,7 +181,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f msgh.msg_controllen = sizeof(control); cmsg = CMSG_FIRSTHDR(&msgh); if (cmsg == NULL) { - VHOST_LOG_CONFIG(ifname, ERR, "cmsg == NULL\n"); + VHOST_CONFIG_LOG(ifname, ERR, "cmsg == NULL"); errno = EINVAL; return -1; } @@ -199,7 +199,7 @@ send_fd_message(char *ifname, int sockfd, char *buf, int buflen, int *fds, int f } while (ret < 0 && errno == EINTR); if (ret < 0) { - VHOST_LOG_CONFIG(ifname, ERR, "sendmsg error on fd %d (%s)\n", + VHOST_CONFIG_LOG(ifname, ERR, "sendmsg error on fd %d (%s)", sockfd, strerror(errno)); return ret; } @@ -252,13 +252,13 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) dev->async_copy = 1; } - VHOST_LOG_CONFIG(vsocket->path, INFO, "new device, handle is %d\n", vid); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new device, handle is %d", vid); if (vsocket->notify_ops->new_connection) { ret = vsocket->notify_ops->new_connection(vid); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add vhost user connection with fd %d\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add vhost user connection with fd %d", fd); goto err_cleanup; } @@ -270,8 +270,8 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_read_cb, NULL, conn); if (ret < 0) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "failed to add fd %d into vhost server fdset\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "failed to add fd %d into vhost server fdset", fd); if (vsocket->notify_ops->destroy_connection) @@ -304,7 +304,7 @@ vhost_user_server_new_connection(int fd, void *dat, int *remove __rte_unused) if (fd < 0) return; - VHOST_LOG_CONFIG(vsocket->path, INFO, "new vhost user connection is %d\n", fd); + VHOST_CONFIG_LOG(vsocket->path, INFO, "new vhost user connection is %d", fd); vhost_user_add_connection(fd, vsocket); } @@ -352,12 +352,12 @@ create_unix_socket(struct vhost_user_socket *vsocket) fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) return -1; - 
VHOST_LOG_CONFIG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d\n", + VHOST_CONFIG_LOG(vsocket->path, INFO, "vhost-user %s: socket created, fd: %d", vsocket->is_server ? "server" : "client", fd); if (!vsocket->is_server && fcntl(fd, F_SETFL, O_NONBLOCK)) { - VHOST_LOG_CONFIG(vsocket->path, ERR, - "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)\n", + VHOST_CONFIG_LOG(vsocket->path, ERR, + "vhost-user: can't set nonblocking mode for socket, fd: %d (%s)", fd, strerror(errno)); close(fd); return -1; @@ -391,11 +391,11 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) */ ret = bind(fd, (struct sockaddr *)&vsocket->un, sizeof(vsocket->un)); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to bind: %s; remove it and try again\n", + VHOST_CONFIG_LOG(path, ERR, "failed to bind: %s; remove it and try again", strerror(errno)); goto err; } - VHOST_LOG_CONFIG(path, INFO, "binding succeeded\n"); + VHOST_CONFIG_LOG(path, INFO, "binding succeeded"); ret = listen(fd, MAX_VIRTIO_BACKLOG); if (ret < 0) @@ -404,7 +404,7 @@ vhost_user_start_server(struct vhost_user_socket *vsocket) ret = fdset_add(&vhost_user.fdset, fd, vhost_user_server_new_connection, NULL, vsocket); if (ret < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to add listen fd %d to vhost server fdset\n", + VHOST_CONFIG_LOG(path, ERR, "failed to add listen fd %d to vhost server fdset", fd); goto err; } @@ -444,12 +444,12 @@ vhost_user_connect_nonblock(char *path, int fd, struct sockaddr *un, size_t sz) flags = fcntl(fd, F_GETFL, 0); if (flags < 0) { - VHOST_LOG_CONFIG(path, ERR, "can't get flags for connfd %d (%s)\n", + VHOST_CONFIG_LOG(path, ERR, "can't get flags for connfd %d (%s)", fd, strerror(errno)); return -2; } if ((flags & O_NONBLOCK) && fcntl(fd, F_SETFL, flags & ~O_NONBLOCK)) { - VHOST_LOG_CONFIG(path, ERR, "can't disable nonblocking on fd %d\n", fd); + VHOST_CONFIG_LOG(path, ERR, "can't disable nonblocking on fd %d", fd); return -2; } return 0; @@ -477,15 +477,15 @@ vhost_user_client_reconnect(void *arg __rte_unused) sizeof(reconn->un)); if (ret == -2) { close(reconn->fd); - VHOST_LOG_CONFIG(reconn->vsocket->path, ERR, - "reconnection for fd %d failed\n", + VHOST_CONFIG_LOG(reconn->vsocket->path, ERR, + "reconnection for fd %d failed", reconn->fd); goto remove_fd; } if (ret == -1) continue; - VHOST_LOG_CONFIG(reconn->vsocket->path, INFO, "connected\n"); + VHOST_CONFIG_LOG(reconn->vsocket->path, INFO, "connected"); vhost_user_add_connection(reconn->fd, reconn->vsocket); remove_fd: TAILQ_REMOVE(&reconn_list.head, reconn, next); @@ -506,7 +506,7 @@ vhost_user_reconnect_init(void) ret = pthread_mutex_init(&reconn_list.mutex, NULL); if (ret < 0) { - VHOST_LOG_CONFIG("thread", ERR, "%s: failed to initialize mutex\n", __func__); + VHOST_CONFIG_LOG("thread", ERR, "%s: failed to initialize mutex", __func__); return ret; } TAILQ_INIT(&reconn_list.head); @@ -514,10 +514,10 @@ vhost_user_reconnect_init(void) ret = rte_thread_create_internal_control(&reconn_tid, "vhost-reco", vhost_user_client_reconnect, NULL); if (ret != 0) { - VHOST_LOG_CONFIG("thread", ERR, "failed to create reconnect thread\n"); + VHOST_CONFIG_LOG("thread", ERR, "failed to create reconnect thread"); if (pthread_mutex_destroy(&reconn_list.mutex)) - VHOST_LOG_CONFIG("thread", ERR, - "%s: failed to destroy reconnect mutex\n", + VHOST_CONFIG_LOG("thread", ERR, + "%s: failed to destroy reconnect mutex", __func__); } @@ -539,17 +539,17 @@ vhost_user_start_client(struct vhost_user_socket *vsocket) return 0; } - VHOST_LOG_CONFIG(path, WARNING, 
"failed to connect: %s\n", strerror(errno)); + VHOST_CONFIG_LOG(path, WARNING, "failed to connect: %s", strerror(errno)); if (ret == -2 || !vsocket->reconnect) { close(fd); return -1; } - VHOST_LOG_CONFIG(path, INFO, "reconnecting...\n"); + VHOST_CONFIG_LOG(path, INFO, "reconnecting..."); reconn = malloc(sizeof(*reconn)); if (reconn == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to allocate memory for reconnect\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to allocate memory for reconnect"); close(fd); return -1; } @@ -638,7 +638,7 @@ rte_vhost_driver_get_vdpa_dev_type(const char *path, uint32_t *type) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -731,7 +731,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -743,7 +743,7 @@ rte_vhost_driver_get_features(const char *path, uint64_t *features) } if (vdpa_dev->ops->get_features(vdpa_dev, &vdpa_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa features for socket file.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa features for socket file."); ret = -1; goto unlock_exit; } @@ -781,7 +781,7 @@ rte_vhost_driver_get_protocol_features(const char *path, pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -794,7 +794,7 @@ rte_vhost_driver_get_protocol_features(const char *path, if (vdpa_dev->ops->get_protocol_features(vdpa_dev, &vdpa_protocol_features) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa protocol features.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa protocol features."); ret = -1; goto unlock_exit; } @@ -818,7 +818,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -830,7 +830,7 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num) } if (vdpa_dev->ops->get_queue_num(vdpa_dev, &vdpa_queue_num) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to get vdpa queue number.\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to get vdpa queue number."); ret = -1; goto unlock_exit; } @@ -848,10 +848,10 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs) struct vhost_user_socket *vsocket; int ret = 0; - VHOST_LOG_CONFIG(path, INFO, "Setting max queue pairs to %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "Setting max queue pairs to %u", max_queue_pairs); if (max_queue_pairs > VHOST_MAX_QUEUE_PAIRS) { - VHOST_LOG_CONFIG(path, ERR, "Library only supports up to %u queue pairs\n", + VHOST_CONFIG_LOG(path, ERR, "Library only supports up to %u queue pairs", VHOST_MAX_QUEUE_PAIRS); return -1; } @@ -859,7 +859,7 @@ rte_vhost_driver_set_max_queue_num(const char 
*path, uint32_t max_queue_pairs) pthread_mutex_lock(&vhost_user.mutex); vsocket = find_vhost_user_socket(path); if (!vsocket) { - VHOST_LOG_CONFIG(path, ERR, "socket file is not registered yet.\n"); + VHOST_CONFIG_LOG(path, ERR, "socket file is not registered yet."); ret = -1; goto unlock_exit; } @@ -898,7 +898,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) pthread_mutex_lock(&vhost_user.mutex); if (vhost_user.vsocket_cnt == MAX_VHOST_SOCKET) { - VHOST_LOG_CONFIG(path, ERR, "the number of vhost sockets reaches maximum\n"); + VHOST_CONFIG_LOG(path, ERR, "the number of vhost sockets reaches maximum"); goto out; } @@ -908,14 +908,14 @@ rte_vhost_driver_register(const char *path, uint64_t flags) memset(vsocket, 0, sizeof(struct vhost_user_socket)); vsocket->path = strdup(path); if (vsocket->path == NULL) { - VHOST_LOG_CONFIG(path, ERR, "failed to copy socket path string\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to copy socket path string"); vhost_user_socket_mem_free(vsocket); goto out; } TAILQ_INIT(&vsocket->conn_list); ret = pthread_mutex_init(&vsocket->conn_mutex, NULL); if (ret) { - VHOST_LOG_CONFIG(path, ERR, "failed to init connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to init connection mutex"); goto out_free; } @@ -936,7 +936,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy && (vsocket->iommu_support || (flags & RTE_VHOST_USER_POSTCOPY_SUPPORT))) { - VHOST_LOG_CONFIG(path, ERR, "async copy with IOMMU or post-copy not supported\n"); + VHOST_CONFIG_LOG(path, ERR, "async copy with IOMMU or post-copy not supported"); goto out_mutex; } @@ -965,7 +965,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) if (vsocket->async_copy) { vsocket->supported_features &= ~(1ULL << VHOST_F_LOG_ALL); vsocket->features &= ~(1ULL << VHOST_F_LOG_ALL); - VHOST_LOG_CONFIG(path, INFO, "logging feature is disabled in async copy mode\n"); + VHOST_CONFIG_LOG(path, INFO, "logging feature is disabled in async copy mode"); } /* @@ -979,8 +979,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags) (1ULL << VIRTIO_NET_F_HOST_TSO6) | (1ULL << VIRTIO_NET_F_HOST_UFO); - VHOST_LOG_CONFIG(path, INFO, "Linear buffers requested without external buffers,\n"); - VHOST_LOG_CONFIG(path, INFO, "disabling host segmentation offloading support\n"); + VHOST_CONFIG_LOG(path, INFO, "Linear buffers requested without external buffers,"); + VHOST_CONFIG_LOG(path, INFO, "disabling host segmentation offloading support"); vsocket->supported_features &= ~seg_offload_features; vsocket->features &= ~seg_offload_features; } @@ -995,7 +995,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) ~(1ULL << VHOST_USER_PROTOCOL_F_PAGEFAULT); } else { #ifndef RTE_LIBRTE_VHOST_POSTCOPY - VHOST_LOG_CONFIG(path, ERR, "Postcopy requested but not compiled\n"); + VHOST_CONFIG_LOG(path, ERR, "Postcopy requested but not compiled"); ret = -1; goto out_mutex; #endif @@ -1023,7 +1023,7 @@ rte_vhost_driver_register(const char *path, uint64_t flags) out_mutex: if (pthread_mutex_destroy(&vsocket->conn_mutex)) { - VHOST_LOG_CONFIG(path, ERR, "failed to destroy connection mutex\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to destroy connection mutex"); } out_free: vhost_user_socket_mem_free(vsocket); @@ -1113,7 +1113,7 @@ rte_vhost_driver_unregister(const char *path) goto again; } - VHOST_LOG_CONFIG(path, INFO, "free connfd %d\n", conn->connfd); + VHOST_CONFIG_LOG(path, INFO, "free connfd %d", conn->connfd); close(conn->connfd); vhost_destroy_device(conn->vid); 
TAILQ_REMOVE(&vsocket->conn_list, conn, next); @@ -1192,14 +1192,14 @@ rte_vhost_driver_start(const char *path) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vhost_user.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vhost fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vhost fdset"); return -1; } int ret = rte_thread_create_internal_control(&fdset_tid, "vhost-evt", fdset_event_dispatch, &vhost_user.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create fdset handling thread"); fdset_pipe_uninit(&vhost_user.fdset); return -1; } diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c index 219eef879c..9776fc07a9 100644 --- a/lib/vhost/vdpa.c +++ b/lib/vhost/vdpa.c @@ -84,8 +84,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, !ops->get_protocol_features || !ops->dev_conf || !ops->dev_close || !ops->set_vring_state || !ops->set_features) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Some mandatory vDPA ops aren't implemented\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Some mandatory vDPA ops aren't implemented"); return NULL; } @@ -107,8 +107,8 @@ rte_vdpa_register_device(struct rte_device *rte_dev, if (ops->get_dev_type) { ret = ops->get_dev_type(dev, &dev->type); if (ret) { - VHOST_LOG_CONFIG(rte_dev->name, ERR, - "Failed to get vdpa dev type.\n"); + VHOST_CONFIG_LOG(rte_dev->name, ERR, + "Failed to get vdpa dev type."); ret = -1; goto out_unlock; } diff --git a/lib/vhost/vduse.c b/lib/vhost/vduse.c index 080b58f7de..c7ba5a61dd 100644 --- a/lib/vhost/vduse.c +++ b/lib/vhost/vduse.c @@ -78,32 +78,32 @@ vduse_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm __rte_unuse ret = ioctl(dev->vduse_dev_fd, VDUSE_IOTLB_GET_FD, &entry); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get IOTLB entry for 0x%" PRIx64, iova); return -1; } fd = ret; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "New IOTLB entry:\n"); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "New IOTLB entry:"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tIOVA: %" PRIx64 " - %" PRIx64, (uint64_t)entry.start, (uint64_t)entry.last); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\toffset: %" PRIx64 "\n", (uint64_t)entry.offset); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tfd: %d\n", fd); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "\tperm: %x\n", entry.perm); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\toffset: %" PRIx64, (uint64_t)entry.offset); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tfd: %d", fd); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "\tperm: %x", entry.perm); size = entry.last - entry.start + 1; mmap_addr = mmap(0, size + entry.offset, entry.perm, MAP_SHARED, fd, 0); if (!mmap_addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to mmap IOTLB entry for 0x%" PRIx64 "\n", iova); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to mmap IOTLB entry for 0x%" PRIx64, iova); ret = -1; goto close_fd; } ret = fstat(fd, &stat); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get page size.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get page size."); munmap(mmap_addr, entry.offset + size); goto close_fd; } @@ -134,14 +134,14 @@ vduse_control_queue_event(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &buf, sizeof(buf)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read control queue 
event: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read control queue event: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue kicked\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue kicked"); if (virtio_net_ctrl_handle(dev)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to handle ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to handle ctrl request"); } static void @@ -156,21 +156,21 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq_info.index = index; ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_GET_INFO, &vq_info); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get VQ %u info: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get VQ %u info: %s", index, strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "VQ %u info:\n", index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnum: %u\n", vq_info.num); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdesc_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "VQ %u info:", index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnum: %u", vq_info.num); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdesc_addr: %llx", (unsigned long long)vq_info.desc_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdriver_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdriver_addr: %llx", (unsigned long long)vq_info.driver_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tdevice_addr: %llx\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tdevice_addr: %llx", (unsigned long long)vq_info.device_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tavail_idx: %u\n", vq_info.split.avail_index); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tready: %u\n", vq_info.ready); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tavail_idx: %u", vq_info.split.avail_index); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tready: %u", vq_info.ready); vq->last_avail_idx = vq_info.split.avail_index; vq->size = vq_info.num; @@ -182,12 +182,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vq->kickfd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); if (vq->kickfd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to init kickfd for VQ %u: %s", index, strerror(errno)); vq->kickfd = VIRTIO_INVALID_EVENTFD; return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tkick fd: %d\n", vq->kickfd); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tkick fd: %d", vq->kickfd); vq->shadow_used_split = rte_malloc_socket(NULL, vq->size * sizeof(struct vring_used_elem), @@ -198,12 +198,12 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) vhost_user_iotlb_rd_lock(vq); if (vring_translate(dev, vq)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to translate vring %d addresses\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to translate vring %d addresses", index); if (vhost_enable_guest_notification(dev, vq, 0)) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to disable guest notifications on vring %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to disable guest notifications on vring %d", index); vhost_user_iotlb_rd_unlock(vq); @@ -212,7 +212,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to setup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); vq->kickfd = VIRTIO_UNINITIALIZED_EVENTFD; @@ -222,8 +222,8 @@ 
vduse_vring_setup(struct virtio_net *dev, unsigned int index) if (vq == dev->cvq) { ret = fdset_add(&vduse.fdset, vq->kickfd, vduse_control_queue_event, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to setup kickfd handler for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to setup kickfd handler for VQ %u: %s", index, strerror(errno)); vq_efd.fd = VDUSE_EVENTFD_DEASSIGN; ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); @@ -232,7 +232,7 @@ vduse_vring_setup(struct virtio_net *dev, unsigned int index) } fdset_pipe_notify(&vduse.fdset); vhost_enable_guest_notification(dev, vq, 1); - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl queue event handler installed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl queue event handler installed"); } } @@ -253,7 +253,7 @@ vduse_vring_cleanup(struct virtio_net *dev, unsigned int index) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP_KICKFD, &vq_efd); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to cleanup kickfd for VQ %u: %s", index, strerror(errno)); close(vq->kickfd); @@ -279,23 +279,23 @@ vduse_device_start(struct virtio_net *dev) { unsigned int i, ret; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Starting device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Starting device..."); dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to get callback ops for driver"); return; } ret = ioctl(dev->vduse_dev_fd, VDUSE_DEV_GET_FEATURES, &dev->features); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to get features: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to get features: %s", strerror(errno)); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Negotiated Virtio features: 0x%" PRIx64, dev->features); if (dev->features & @@ -331,7 +331,7 @@ vduse_device_stop(struct virtio_net *dev) { unsigned int i; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Stopping device...\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Stopping device..."); vhost_destroy_device_notify(dev); @@ -357,34 +357,34 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = read(fd, &req, sizeof(req)); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to read request: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to read request: %s", strerror(errno)); return; } else if (ret < (int)sizeof(req)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Incomplete to read request %d\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Incomplete to read request %d", ret); return; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "New request: %s (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "New request: %s (%u)", vduse_req_id_to_str(req.type), req.type); switch (req.type) { case VDUSE_GET_VQ_STATE: vq = dev->virtqueue[req.vq_state.index]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tvq index: %u, avail_index: %u", req.vq_state.index, vq->last_avail_idx); resp.vq_state.split.avail_index = vq->last_avail_idx; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_SET_STATUS: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tnew status: 0x%08x\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tnew status: 0x%08x", req.s.status); old_status = dev->status; dev->status = 
req.s.status; resp.result = VDUSE_REQ_RESULT_OK; break; case VDUSE_UPDATE_IOTLB: - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tIOVA range: %" PRIx64 " - %" PRIx64, (uint64_t)req.iova.start, (uint64_t)req.iova.last); vhost_user_iotlb_cache_remove(dev, req.iova.start, req.iova.last - req.iova.start + 1); @@ -399,7 +399,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) ret = write(dev->vduse_dev_fd, &resp, sizeof(resp)); if (ret != sizeof(resp)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to write response %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to write response %s", strerror(errno)); return; } @@ -411,7 +411,7 @@ vduse_events_handler(int fd, void *arg, int *remove __rte_unused) vduse_device_stop(dev); } - VHOST_LOG_CONFIG(dev->ifname, INFO, "Request %s (%u) handled successfully\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "Request %s (%u) handled successfully", vduse_req_id_to_str(req.type), req.type); } @@ -435,14 +435,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) * rebuild the wait list of poll. */ if (fdset_pipe_init(&vduse.fdset) < 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create pipe for vduse fdset\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create pipe for vduse fdset"); return -1; } ret = rte_thread_create_internal_control(&fdset_tid, "vduse-evt", fdset_event_dispatch, &vduse.fdset); if (ret != 0) { - VHOST_LOG_CONFIG(path, ERR, "failed to create vduse fdset handling thread\n"); + VHOST_CONFIG_LOG(path, ERR, "failed to create vduse fdset handling thread"); fdset_pipe_uninit(&vduse.fdset); return -1; } @@ -452,13 +452,13 @@ vduse_device_create(const char *path, bool compliant_ol_flags) control_fd = open(VDUSE_CTRL_PATH, O_RDWR); if (control_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open %s: %s", VDUSE_CTRL_PATH, strerror(errno)); return -1; } if (ioctl(control_fd, VDUSE_SET_API_VERSION, &ver)) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set API version: %" PRIu64 ": %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set API version: %" PRIu64 ": %s", ver, strerror(errno)); ret = -1; goto out_ctrl_close; @@ -467,24 +467,24 @@ vduse_device_create(const char *path, bool compliant_ol_flags) dev_config = malloc(offsetof(struct vduse_dev_config, config) + sizeof(vnet_config)); if (!dev_config) { - VHOST_LOG_CONFIG(name, ERR, "Failed to allocate VDUSE config\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to allocate VDUSE config"); ret = -1; goto out_ctrl_close; } ret = rte_vhost_driver_get_features(path, &features); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get backend features\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get backend features"); goto out_free; } ret = rte_vhost_driver_get_queue_num(path, &max_queue_pairs); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to get max queue pairs\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to get max queue pairs"); goto out_free; } - VHOST_LOG_CONFIG(path, INFO, "VDUSE max queue pairs: %u\n", max_queue_pairs); + VHOST_CONFIG_LOG(path, INFO, "VDUSE max queue pairs: %u", max_queue_pairs); total_queues = max_queue_pairs * 2; if (max_queue_pairs == 1) @@ -506,14 +506,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(control_fd, VDUSE_CREATE_DEV, dev_config); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to create VDUSE device: 
%s", strerror(errno)); goto out_free; } dev_fd = open(path, O_RDWR); if (dev_fd < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to open device %s: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to open device %s: %s", path, strerror(errno)); ret = -1; goto out_dev_close; @@ -521,14 +521,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fcntl(dev_fd, F_SETFL, O_NONBLOCK); if (ret < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set chardev as non-blocking: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to set chardev as non-blocking: %s", strerror(errno)); goto out_dev_close; } vid = vhost_new_device(&vduse_backend_ops); if (vid < 0) { - VHOST_LOG_CONFIG(name, ERR, "Failed to create new Vhost device\n"); + VHOST_CONFIG_LOG(name, ERR, "Failed to create new Vhost device"); ret = -1; goto out_dev_close; } @@ -549,7 +549,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = alloc_vring_queue(dev, i); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to alloc vring %d metadata\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to alloc vring %d metadata", i); goto out_dev_destroy; } @@ -558,7 +558,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = ioctl(dev->vduse_dev_fd, VDUSE_VQ_SETUP, &vq_cfg); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to set-up VQ %d\n", i); + VHOST_CONFIG_LOG(name, ERR, "Failed to set-up VQ %d", i); goto out_dev_destroy; } } @@ -567,7 +567,7 @@ vduse_device_create(const char *path, bool compliant_ol_flags) ret = fdset_add(&vduse.fdset, dev->vduse_dev_fd, vduse_events_handler, NULL, dev); if (ret) { - VHOST_LOG_CONFIG(name, ERR, "Failed to add fd %d to vduse fdset\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to add fd %d to vduse fdset", dev->vduse_dev_fd); goto out_dev_destroy; } @@ -624,7 +624,7 @@ vduse_device_destroy(const char *path) if (dev->vduse_ctrl_fd >= 0) { ret = ioctl(dev->vduse_ctrl_fd, VDUSE_DESTROY_DEV, name); if (ret) - VHOST_LOG_CONFIG(name, ERR, "Failed to destroy VDUSE device: %s\n", + VHOST_CONFIG_LOG(name, ERR, "Failed to destroy VDUSE device: %s", strerror(errno)); close(dev->vduse_ctrl_fd); dev->vduse_ctrl_fd = -1; diff --git a/lib/vhost/vduse.h b/lib/vhost/vduse.h index 4879b1f900..0d8f3f1205 100644 --- a/lib/vhost/vduse.h +++ b/lib/vhost/vduse.h @@ -21,14 +21,14 @@ vduse_device_create(const char *path, bool compliant_ol_flags) { RTE_SET_USED(compliant_ol_flags); - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } static inline int vduse_device_destroy(const char *path) { - VHOST_LOG_CONFIG(path, ERR, "VDUSE support disabled at build time\n"); + VHOST_CONFIG_LOG(path, ERR, "VDUSE support disabled at build time"); return -1; } diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c index 8a1f992d9d..5912a42979 100644 --- a/lib/vhost/vhost.c +++ b/lib/vhost/vhost.c @@ -100,8 +100,8 @@ __vhost_iova_to_vva(struct virtio_net *dev, struct vhost_virtqueue *vq, vhost_user_iotlb_pending_insert(dev, iova, perm); if (vhost_iotlb_miss(dev, iova, perm)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "IOTLB miss req failed for IOVA 0x%" PRIx64 "\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "IOTLB miss req failed for IOVA 0x%" PRIx64, iova); vhost_user_iotlb_pending_remove(dev, iova, 1, perm); } @@ -174,8 +174,8 @@ __vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed 
to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -292,8 +292,8 @@ __vhost_log_cache_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, hva = __vhost_iova_to_vva(dev, vq, iova, &map_len, VHOST_ACCESS_RW); if (map_len != len) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to write log for IOVA 0x%" PRIx64 ". No IOTLB entry found", iova); return; } @@ -473,9 +473,9 @@ translate_log_addr(struct virtio_net *dev, struct vhost_virtqueue *vq, gpa = hva_to_gpa(dev, hva, exp_size); if (!gpa) { - VHOST_LOG_DATA(dev->ifname, ERR, + VHOST_DATA_LOG(dev->ifname, ERR, "failed to find GPA for log_addr: 0x%" - PRIx64 " hva: 0x%" PRIx64 "\n", + PRIx64 " hva: 0x%" PRIx64, log_addr, hva); return 0; } @@ -609,7 +609,7 @@ init_vring_queue(struct virtio_net *dev __rte_unused, struct vhost_virtqueue *vq #ifdef RTE_LIBRTE_VHOST_NUMA if (get_mempolicy(&numa_node, NULL, 0, vq, MPOL_F_NODE | MPOL_F_ADDR)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); numa_node = SOCKET_ID_ANY; } @@ -640,8 +640,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx) vq = rte_zmalloc(NULL, sizeof(struct vhost_virtqueue), 0); if (vq == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vring %u.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vring %u.", i); return -1; } @@ -678,8 +678,8 @@ reset_device(struct virtio_net *dev) struct vhost_virtqueue *vq = dev->virtqueue[i]; if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to reset vring, virtqueue not allocated (%d)\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to reset vring, virtqueue not allocated (%d)", i); continue; } reset_vring_queue(dev, vq); @@ -697,17 +697,17 @@ vhost_new_device(struct vhost_backend_ops *ops) int i; if (ops == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing backend ops.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing backend ops."); return -1; } if (ops->iotlb_miss == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IOTLB miss backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IOTLB miss backend op."); return -1; } if (ops->inject_irq == NULL) { - VHOST_LOG_CONFIG("device", ERR, "missing IRQ injection backend op.\n"); + VHOST_CONFIG_LOG("device", ERR, "missing IRQ injection backend op."); return -1; } @@ -718,14 +718,14 @@ vhost_new_device(struct vhost_backend_ops *ops) } if (i == RTE_MAX_VHOST_DEVICE) { - VHOST_LOG_CONFIG("device", ERR, "failed to find a free slot for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to find a free slot for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } dev = rte_zmalloc(NULL, sizeof(struct virtio_net), 0); if (dev == NULL) { - VHOST_LOG_CONFIG("device", ERR, "failed to allocate memory for new device.\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to allocate memory for new device."); pthread_mutex_unlock(&vhost_dev_lock); return -1; } @@ -832,7 +832,7 @@ vhost_setup_virtio_net(int vid, bool enable, bool compliant_ol_flags, bool stats dev->flags &= ~VIRTIO_DEV_SUPPORT_IOMMU; if (vhost_user_iotlb_init(dev) < 0) - VHOST_LOG_CONFIG("device", ERR, "failed to init IOTLB\n"); + VHOST_CONFIG_LOG("device", ERR, "failed to init IOTLB"); } @@ 
-891,7 +891,7 @@ rte_vhost_get_numa_node(int vid) ret = get_mempolicy(&numa_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to query numa node: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to query numa node: %s", rte_strerror(errno)); return -1; } @@ -1608,8 +1608,8 @@ rte_vhost_rx_queue_count(int vid, uint16_t qid) return 0; if (unlikely(qid >= dev->nr_vring || (qid & 1) == 0)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, qid); return 0; } @@ -1775,16 +1775,16 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) int node = vq->numa_node; if (unlikely(vq->async)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "async register failed: already registered (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "async register failed: already registered (qid: %d)", vq->index); return -1; } async = rte_zmalloc_socket(NULL, sizeof(struct vhost_async), 0, node); if (!async) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async metadata (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async metadata (qid: %d)", vq->index); return -1; } @@ -1792,8 +1792,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_info = rte_malloc_socket(NULL, vq->size * sizeof(struct async_inflight_info), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async_pkts_info (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async_pkts_info (qid: %d)", vq->index); goto out_free_async; } @@ -1801,8 +1801,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool), RTE_CACHE_LINE_SIZE, node); if (!async->pkts_cmpl_flag) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async pkts_cmpl_flag (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async pkts_cmpl_flag (qid: %d)", vq->index); goto out_free_async; } @@ -1812,8 +1812,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, node); if (!async->buffers_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async buffers (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async buffers (qid: %d)", vq->index); goto out_free_inflight; } @@ -1822,8 +1822,8 @@ async_channel_register(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->size * sizeof(struct vring_used_elem), RTE_CACHE_LINE_SIZE, node); if (!async->descs_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate async descs (qid: %d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate async descs (qid: %d)", vq->index); goto out_free_inflight; } @@ -1914,8 +1914,8 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to unregister async channel, virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to unregister async channel, virtqueue busy."); return ret; } @@ -1927,9 +1927,9 @@ rte_vhost_async_channel_unregister(int vid, uint16_t queue_id) if (!vq->async) { ret = 0; } else if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister 
async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); } else { vhost_free_async_mem(vq); ret = 0; @@ -1964,9 +1964,9 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id) return 0; if (vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to unregister async channel.\n"); - VHOST_LOG_CONFIG(dev->ifname, ERR, - "inflight packets must be completed before unregistration.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to unregister async channel."); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "inflight packets must be completed before unregistration."); return -1; } @@ -1985,17 +1985,17 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2005,8 +2005,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * info.max_vchans, RTE_CACHE_LINE_SIZE); if (vchans == NULL) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate vchans for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate vchans for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } @@ -2015,7 +2015,7 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) } if (dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", INFO, "DMA %d vChannel %u already registered.\n", + VHOST_CONFIG_LOG("dma", INFO, "DMA %d vChannel %u already registered.", dma_id, vchan_id); pthread_mutex_unlock(&vhost_dma_lock); return 0; @@ -2027,8 +2027,8 @@ rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id) pkts_cmpl_flag_addr = rte_zmalloc(NULL, sizeof(bool *) * max_desc, RTE_CACHE_LINE_SIZE); if (!pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.\n", + VHOST_CONFIG_LOG("dma", ERR, + "Failed to allocate pkts_cmpl_flag_addr for DMA %d vChannel %u.", dma_id, vchan_id); if (dma_copy_track[dma_id].nr_vchans == 0) { @@ -2070,8 +2070,8 @@ rte_vhost_async_get_inflight(int vid, uint16_t queue_id) return ret; if (rte_rwlock_write_trylock(&vq->access_lock)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to check in-flight packets. virtqueue busy.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to check in-flight packets. 
virtqueue busy."); return ret; } @@ -2284,30 +2284,30 @@ rte_vhost_async_dma_unconfigure(int16_t dma_id, uint16_t vchan_id) pthread_mutex_lock(&vhost_dma_lock); if (!rte_dma_is_valid(dma_id)) { - VHOST_LOG_CONFIG("dma", ERR, "DMA %d is not found.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "DMA %d is not found.", dma_id); goto error; } if (rte_dma_info_get(dma_id, &info) != 0) { - VHOST_LOG_CONFIG("dma", ERR, "Fail to get DMA %d information.\n", dma_id); + VHOST_CONFIG_LOG("dma", ERR, "Fail to get DMA %d information.", dma_id); goto error; } if (vchan_id >= info.max_vchans || !dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr) { - VHOST_LOG_CONFIG("dma", ERR, "Invalid channel %d:%u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, "Invalid channel %d:%u.", dma_id, vchan_id); goto error; } if (rte_dma_stats_get(dma_id, vchan_id, &stats) != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Failed to get stats for DMA %d vChannel %u.\n", dma_id, vchan_id); + VHOST_CONFIG_LOG("dma", ERR, + "Failed to get stats for DMA %d vChannel %u.", dma_id, vchan_id); goto error; } if (stats.submitted - stats.completed != 0) { - VHOST_LOG_CONFIG("dma", ERR, - "Do not unconfigure when there are inflight packets.\n"); + VHOST_CONFIG_LOG("dma", ERR, + "Do not unconfigure when there are inflight packets."); goto error; } diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index 5f24911190..e0f6dd4081 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -673,17 +673,15 @@ vhost_log_write_iova(struct virtio_net *dev, struct vhost_virtqueue *vq, } extern int vhost_config_log_level; +#define RTE_LOGTYPE_VHOST_CONFIG vhost_config_log_level extern int vhost_data_log_level; +#define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level -#define VHOST_LOG_CONFIG(prefix, level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, vhost_config_log_level, \ - "VHOST_CONFIG: (%s) " fmt, prefix, ##args) +#define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ + RTE_LOG(level, VHOST_CONFIG, "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) -#define VHOST_LOG_DATA(prefix, level, fmt, args...) \ - (void)((RTE_LOG_ ## level <= RTE_LOG_DP_LEVEL) ? \ - rte_log(RTE_LOG_ ## level, vhost_data_log_level, \ - "VHOST_DATA: (%s) " fmt, prefix, ##args) : \ - 0) +#define VHOST_DATA_LOG(prefix, level, fmt, args...) 
\ + RTE_LOG_DP(level, VHOST_DATA, "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 @@ -702,7 +700,7 @@ extern int vhost_data_log_level; } \ snprintf(packet + strnlen(packet, VHOST_MAX_PRINT_BUFF), VHOST_MAX_PRINT_BUFF - strnlen(packet, VHOST_MAX_PRINT_BUFF), "\n"); \ \ - VHOST_LOG_DATA(device->ifname, DEBUG, "%s", packet); \ + RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ } while (0) #else #define PRINT_PACKET(device, addr, size, header) do {} while (0) @@ -830,7 +828,7 @@ get_device(int vid) dev = vhost_devices[vid]; if (unlikely(!dev)) { - VHOST_LOG_CONFIG("device", ERR, "(%d) device not found.\n", vid); + VHOST_CONFIG_LOG("device", ERR, "(%d) device not found.", vid); } return dev; @@ -963,8 +961,8 @@ vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq) vq->signalled_used = new; vq->signalled_used_valid = true; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: used_event_idx=%d, old=%d, new=%d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: used_event_idx=%d, old=%d, new=%d", __func__, vhost_used_event(vq), old, new); if (vhost_need_event(vhost_used_event(vq), new, old) || diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c index 413f068bcd..bac10e6182 100644 --- a/lib/vhost/vhost_user.c +++ b/lib/vhost/vhost_user.c @@ -93,8 +93,8 @@ validate_msg_fds(struct virtio_net *dev, struct vhu_msg_context *ctx, int expect if (ctx->fd_num == expected_fds) return 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, - "expect %d FDs for request %s, received %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "expect %d FDs for request %s, received %d", expected_fds, vhost_message_handlers[ctx->msg.request.frontend].description, ctx->fd_num); @@ -144,7 +144,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) return; /* DMA mapping errors won't stop VHOST_USER_SET_MEM_TABLE. */ - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine map failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine map failed"); } } @@ -160,7 +160,7 @@ async_dma_map(struct virtio_net *dev, bool do_map) if (rte_errno == EINVAL) return; - VHOST_LOG_CONFIG(dev->ifname, ERR, "DMA engine unmap failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "DMA engine unmap failed"); } } } @@ -339,7 +339,7 @@ vhost_user_set_features(struct virtio_net **pdev, rte_vhost_driver_get_features(dev->ifname, &vhost_features); if (features & ~vhost_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid negotiated features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid negotiated features."); dev->flags |= VIRTIO_DEV_FEATURES_FAILED; dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; @@ -356,8 +356,8 @@ vhost_user_set_features(struct virtio_net **pdev, * is enabled when the live-migration starts. 
*/ if ((dev->features ^ features) & ~(1ULL << VHOST_F_LOG_ALL)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "features changed while device is running.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "features changed while device is running."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -374,11 +374,11 @@ vhost_user_set_features(struct virtio_net **pdev, } else { dev->vhost_hlen = sizeof(struct virtio_net_hdr); } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Virtio features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Virtio features: 0x%" PRIx64, dev->features); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "mergeable RX buffers %s, virtio 1 %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "mergeable RX buffers %s, virtio 1 %s", (dev->features & (1 << VIRTIO_NET_F_MRG_RXBUF)) ? "on" : "off", (dev->features & (1ULL << VIRTIO_F_VERSION_1)) ? "on" : "off"); @@ -426,8 +426,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, struct vhost_virtqueue *vq = dev->virtqueue[ctx->msg.payload.state.index]; if (ctx->msg.payload.state.num > 32768) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", ctx->msg.payload.state.num); return RTE_VHOST_MSG_RESULT_ERR; } @@ -445,8 +445,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, */ if (!vq_is_packed(dev)) { if (vq->size & (vq->size - 1)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid virtqueue size %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid virtqueue size %u", vq->size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -459,8 +459,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, sizeof(struct vring_used_elem_packed), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_packed) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for shadow used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for shadow used ring."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -472,8 +472,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->shadow_used_split) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for vq internal data.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for vq internal data."); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -483,8 +483,8 @@ vhost_user_set_vring_num(struct virtio_net **pdev, vq->size * sizeof(struct batch_copy_elem), RTE_CACHE_LINE_SIZE, vq->numa_node); if (!vq->batch_copy_elems) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for batching copy.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for batching copy."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -520,8 +520,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&node, NULL, 0, vq->desc, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "unable to get virtqueue %d numa information.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "unable to get virtqueue %d numa information.", vq->index); return; } @@ -531,15 +531,15 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq = rte_realloc_socket(*pvq, sizeof(**pvq), 0, node); if (!vq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc virtqueue %d on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc virtqueue %d on node %d", (*pvq)->index, node); return; } *pvq = vq; if (vq != dev->virtqueue[vq->index]) { - VHOST_LOG_CONFIG(dev->ifname, 
INFO, "reallocated virtqueue on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated virtqueue on node %d", node); dev->virtqueue[vq->index] = vq; } @@ -549,8 +549,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sup = rte_realloc_socket(vq->shadow_used_packed, vq->size * sizeof(*sup), RTE_CACHE_LINE_SIZE, node); if (!sup) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow packed on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow packed on node %d", node); return; } @@ -561,8 +561,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) sus = rte_realloc_socket(vq->shadow_used_split, vq->size * sizeof(*sus), RTE_CACHE_LINE_SIZE, node); if (!sus) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc shadow split on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc shadow split on node %d", node); return; } @@ -572,8 +572,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) bce = rte_realloc_socket(vq->batch_copy_elems, vq->size * sizeof(*bce), RTE_CACHE_LINE_SIZE, node); if (!bce) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc batch copy elem on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc batch copy elem on node %d", node); return; } @@ -584,8 +584,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) lc = rte_realloc_socket(vq->log_cache, sizeof(*lc) * VHOST_LOG_CACHE_NR, 0, node); if (!lc) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc log cache on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc log cache on node %d", node); return; } @@ -597,8 +597,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ri = rte_realloc_socket(vq->resubmit_inflight, sizeof(*ri), 0, node); if (!ri) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit inflight on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit inflight on node %d", node); return; } @@ -610,8 +610,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) rd = rte_realloc_socket(ri->resubmit_list, sizeof(*rd) * ri->resubmit_num, 0, node); if (!rd) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc resubmit list on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc resubmit list on node %d", node); return; } @@ -628,7 +628,7 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) ret = get_mempolicy(&dev_node, NULL, 0, dev, MPOL_F_NODE | MPOL_F_ADDR); if (ret) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "unable to get numa information.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "unable to get numa information."); return; } @@ -637,20 +637,20 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) dev = rte_realloc_socket(*pdev, sizeof(**pdev), 0, node); if (!dev) { - VHOST_LOG_CONFIG((*pdev)->ifname, ERR, "failed to realloc dev on node %d\n", node); + VHOST_CONFIG_LOG((*pdev)->ifname, ERR, "failed to realloc dev on node %d", node); return; } *pdev = dev; - VHOST_LOG_CONFIG(dev->ifname, INFO, "reallocated device on node %d\n", node); + VHOST_CONFIG_LOG(dev->ifname, INFO, "reallocated device on node %d", node); vhost_devices[dev->vid] = dev; mem_size = sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * dev->mem->nregions; mem = rte_realloc_socket(dev->mem, mem_size, 0, node); if (!mem) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc mem 
table on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc mem table on node %d", node); return; } @@ -659,8 +659,8 @@ numa_realloc(struct virtio_net **pdev, struct vhost_virtqueue **pvq) gp = rte_realloc_socket(dev->guest_pages, dev->max_guest_pages * sizeof(*gp), RTE_CACHE_LINE_SIZE, node); if (!gp) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to realloc guest pages on node %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to realloc guest pages on node %d", node); return; } @@ -771,8 +771,8 @@ mem_set_dump(struct virtio_net *dev, void *ptr, size_t size, bool enable, uint64 size_t len = end - (uintptr_t)start; if (madvise(start, len, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) { - VHOST_LOG_CONFIG(dev->ifname, INFO, - "could not set coredump preference (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "could not set coredump preference (%s).", strerror(errno)); } #endif } @@ -791,7 +791,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->log_guest_addr = log_addr_to_gpa(dev, vq); if (vq->log_guest_addr == 0) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map log_guest_addr.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map log_guest_addr."); return; } } @@ -803,7 +803,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) if (vq->desc_packed == NULL || len != sizeof(struct vring_packed_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc_packed ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc_packed ring."); return; } @@ -819,8 +819,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.avail_user_addr, &len); if (vq->driver_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find driver area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find driver area address."); return; } @@ -832,8 +832,8 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq, vq->ring_addrs.used_user_addr, &len); if (vq->device_event == NULL || len != sizeof(struct vring_packed_desc_event)) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "failed to find device area address.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "failed to find device area address."); return; } @@ -851,7 +851,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->desc = (struct vring_desc *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.desc_user_addr, &len); if (vq->desc == 0 || len != sizeof(struct vring_desc) * vq->size) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map desc ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map desc ring."); return; } @@ -867,7 +867,7 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->avail = (struct vring_avail *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.avail_user_addr, &len); if (vq->avail == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to map avail ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map avail ring."); return; } @@ -880,28 +880,28 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq) vq->used = (struct vring_used *)(uintptr_t)ring_addr_to_vva(dev, vq, vq->ring_addrs.used_user_addr, &len); if (vq->used == 0 || len != expected_len) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "failed to 
map used ring.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "failed to map used ring."); return; } mem_set_dump(dev, vq->used, len, true, hua_to_alignment(dev->mem, vq->used)); if (vq->last_used_idx != vq->used->idx) { - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "last_used_idx (%u) and vq->used->idx (%u) mismatches;\n", + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "last_used_idx (%u) and vq->used->idx (%u) mismatches;", vq->last_used_idx, vq->used->idx); vq->last_used_idx = vq->used->idx; vq->last_avail_idx = vq->used->idx; - VHOST_LOG_CONFIG(dev->ifname, WARNING, - "some packets maybe resent for Tx and dropped for Rx\n"); + VHOST_CONFIG_LOG(dev->ifname, WARNING, + "some packets maybe resent for Tx and dropped for Rx"); } vq->access_ok = true; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address avail: %p\n", vq->avail); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address used: %p\n", vq->used); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64 "\n", vq->log_guest_addr); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address desc: %p", vq->desc); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address avail: %p", vq->avail); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "mapped address used: %p", vq->used); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "log_guest_addr: %" PRIx64, vq->log_guest_addr); } /* @@ -975,8 +975,8 @@ vhost_user_set_vring_base(struct virtio_net **pdev, vq->last_avail_idx = ctx->msg.payload.state.num; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%u last_used_idx:%u last_avail_idx:%u.\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%u last_used_idx:%u last_avail_idx:%u.", ctx->msg.payload.state.index, vq->last_used_idx, vq->last_avail_idx); return RTE_VHOST_MSG_RESULT_OK; @@ -996,7 +996,7 @@ add_one_guest_page(struct virtio_net *dev, uint64_t guest_phys_addr, dev->max_guest_pages * sizeof(*page), RTE_CACHE_LINE_SIZE); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "cannot realloc guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "cannot realloc guest_pages"); rte_free(old_pages); return -1; } @@ -1077,12 +1077,12 @@ dump_guest_pages(struct virtio_net *dev) for (i = 0; i < dev->nr_guest_pages; i++) { page = &dev->guest_pages[i]; - VHOST_LOG_CONFIG(dev->ifname, INFO, "guest physical page region %u\n", i); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "guest physical page region %u", i); + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tguest_phys_addr: %" PRIx64, page->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\thost_iova : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\thost_iova : %" PRIx64, page->host_iova); - VHOST_LOG_CONFIG(dev->ifname, INFO, "\tsize : %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "\tsize : %" PRIx64, page->size); } } @@ -1131,9 +1131,9 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, if (ioctl(dev->postcopy_ufd, UFFDIO_REGISTER, &reg_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to register ufd for region " - "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s\n", + "%" PRIx64 " - %" PRIx64 " (ufd = %d) %s", (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1, @@ -1142,8 +1142,8 @@ vhost_user_postcopy_region_register(struct virtio_net *dev, return -1; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t userfaultfd registered for range :
%" PRIx64 " - %" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t userfaultfd registered for range : %" PRIx64 " - %" PRIx64, (uint64_t)reg_struct.range.start, (uint64_t)reg_struct.range.start + (uint64_t)reg_struct.range.len - 1); @@ -1190,8 +1190,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, * we've got to wait before we're allowed to generate faults. */ if (read_vhost_message(dev, main_fd, &ack_ctx) <= 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to read qemu ack on postcopy set-mem-table\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to read qemu ack on postcopy set-mem-table"); return -1; } @@ -1199,8 +1199,8 @@ vhost_user_postcopy_register(struct virtio_net *dev, int main_fd, return -1; if (ack_ctx.msg.request.frontend != VHOST_USER_SET_MEM_TABLE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "bad qemu ack on postcopy set-mem-table (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "bad qemu ack on postcopy set-mem-table (%d)", ack_ctx.msg.request.frontend); return -1; } @@ -1227,8 +1227,8 @@ vhost_user_mmap_region(struct virtio_net *dev, /* Check for memory_size + mmap_offset overflow */ if (mmap_offset >= -region->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap_offset (%#"PRIx64") and memory_size (%#"PRIx64") overflow", mmap_offset, region->size); return -1; } @@ -1243,7 +1243,7 @@ vhost_user_mmap_region(struct virtio_net *dev, */ alignment = get_blk_size(region->fd); if (alignment == (uint64_t)-1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "couldn't get hugepage size through fstat\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "couldn't get hugepage size through fstat"); return -1; } mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment); @@ -1256,8 +1256,8 @@ vhost_user_mmap_region(struct virtio_net *dev, * mmap() kernel implementation would return an error, but * better catch it before and provide useful info in the logs. 
*/ - VHOST_LOG_CONFIG(dev->ifname, ERR, - "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "mmap size (0x%" PRIx64 ") or alignment (0x%" PRIx64 ") is invalid", region->size + mmap_offset, alignment); return -1; } @@ -1267,7 +1267,7 @@ vhost_user_mmap_region(struct virtio_net *dev, MAP_SHARED | populate, region->fd, 0); if (mmap_addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap failed (%s).\n", strerror(errno)); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap failed (%s).", strerror(errno)); return -1; } @@ -1278,35 +1278,35 @@ vhost_user_mmap_region(struct virtio_net *dev, if (dev->async_copy) { if (add_guest_pages(dev, region, alignment) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "adding guest pages to region failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "adding guest pages to region failed."); return -1; } } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "guest memory region size: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "guest memory region size: 0x%" PRIx64, region->size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest physical addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest physical addr: 0x%" PRIx64, region->guest_phys_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t guest virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t guest virtual addr: 0x%" PRIx64, region->guest_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t host virtual addr: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t host virtual addr: 0x%" PRIx64, region->host_user_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap addr : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap addr : 0x%" PRIx64, (uint64_t)(uintptr_t)mmap_addr); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap size : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap size : 0x%" PRIx64, mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap align: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap align: 0x%" PRIx64, alignment); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t mmap off : 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t mmap off : 0x%" PRIx64, mmap_offset); return 0; @@ -1329,14 +1329,14 @@ vhost_user_set_mem_table(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (memory->nregions > VHOST_MEMORY_MAX_NREGIONS) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "too many memory regions (%u)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "too many memory regions (%u)", memory->nregions); goto close_msg_fds; } if (dev->mem && !vhost_memory_changed(memory, dev->mem)) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "memory regions not changed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "memory regions not changed"); close_msg_fds(ctx); @@ -1386,8 +1386,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, RTE_CACHE_LINE_SIZE, numa_node); if (dev->guest_pages == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for dev->guest_pages\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for dev->guest_pages"); goto close_msg_fds; } } @@ -1395,7 +1395,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, dev->mem = rte_zmalloc_socket("vhost-mem-table", sizeof(struct rte_vhost_memory) + sizeof(struct rte_vhost_mem_region) * memory->nregions, 0, numa_node); if (dev->mem == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to allocate memory for dev->mem\n"); + 
VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to allocate memory for dev->mem"); goto free_guest_pages; } @@ -1416,7 +1416,7 @@ vhost_user_set_mem_table(struct virtio_net **pdev, mmap_offset = memory->regions[i].mmap_offset; if (vhost_user_mmap_region(dev, reg, mmap_offset) < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap region %u\n", i); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap region %u", i); goto free_mem_table; } @@ -1538,7 +1538,7 @@ virtio_is_ready(struct virtio_net *dev) dev->flags |= VIRTIO_DEV_READY; if (!(dev->flags & VIRTIO_DEV_RUNNING)) - VHOST_LOG_CONFIG(dev->ifname, INFO, "virtio is now ready for processing.\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "virtio is now ready for processing."); return 1; } @@ -1559,7 +1559,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f if (mfd == -1) { mfd = mkstemp(fname); if (mfd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to get inflight buffer fd\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to get inflight buffer fd"); return NULL; } @@ -1567,14 +1567,14 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f } if (ftruncate(mfd, size) == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc inflight buffer"); close(mfd); return NULL; } ptr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, mfd, 0); if (ptr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap inflight buffer\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap inflight buffer"); close(mfd); return NULL; } @@ -1616,8 +1616,8 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, void *addr; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid get_inflight_fd message size is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid get_inflight_fd message size is %d", ctx->msg.size); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1633,7 +1633,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (!dev->inflight_info) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1642,11 +1642,11 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, num_queues = ctx->msg.payload.inflight.num_queues; queue_size = ctx->msg.payload.inflight.queue_size; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd num_queues: %u", ctx->msg.payload.inflight.num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "get_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "get_inflight_fd queue_size: %u", ctx->msg.payload.inflight.queue_size); if (vq_is_packed(dev)) @@ -1657,7 +1657,7 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, mmap_size = num_queues * pervq_inflight_size; addr = inflight_mem_alloc(dev, "vhost-inflight", mmap_size, &fd); if (!addr) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc vhost inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc vhost inflight area"); ctx->msg.payload.inflight.mmap_size = 0; return RTE_VHOST_MSG_RESULT_ERR; } @@ -1691,14 +1691,14 @@ vhost_user_get_inflight_fd(struct virtio_net **pdev, } } - 
VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_size: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_size: %"PRIu64, ctx->msg.payload.inflight.mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight mmap_offset: %"PRIu64, ctx->msg.payload.inflight.mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "send inflight fd: %d\n", ctx->fds[0]); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "send inflight fd: %d", ctx->fds[0]); return RTE_VHOST_MSG_RESULT_REPLY; } @@ -1722,8 +1722,8 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, fd = ctx->fds[0]; if (ctx->msg.size != sizeof(ctx->msg.payload.inflight) || fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid set_inflight_fd message size is %d,fd is %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid set_inflight_fd message size is %d,fd is %d", ctx->msg.size, fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1738,21 +1738,21 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, else pervq_inflight_size = get_pervq_shm_size_split(queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64"\n", mmap_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd mmap_offset: %"PRIu64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "set_inflight_fd mmap_size: %"PRIu64, mmap_size); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd mmap_offset: %"PRIu64, mmap_offset); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd num_queues: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd num_queues: %u", num_queues); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd queue_size: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd queue_size: %u", queue_size); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd fd: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd fd: %d", fd); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set_inflight_fd pervq_inflight_size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set_inflight_fd pervq_inflight_size: %d", pervq_inflight_size); /* @@ -1766,7 +1766,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, dev->inflight_info = rte_zmalloc_socket("inflight_info", sizeof(struct inflight_mem_info), 0, numa_node); if (dev->inflight_info == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc dev inflight area\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc dev inflight area"); return RTE_VHOST_MSG_RESULT_ERR; } dev->inflight_info->fd = -1; @@ -1780,7 +1780,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev, addr = mmap(0, mmap_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, mmap_offset); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to mmap share memory.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to mmap share memory."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1831,8 +1831,8 @@ vhost_user_set_vring_call(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring call idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring call idx:%d file:%d", file.index, file.fd); vq = dev->virtqueue[file.index]; @@ -1863,7 +1863,7 @@ static int vhost_user_set_vring_err(struct virtio_net **pdev, if (!(ctx->msg.payload.u64 & VHOST_USER_VRING_NOFD_MASK)) close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented"); 
return RTE_VHOST_MSG_RESULT_OK; } @@ -1929,8 +1929,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (!resubmit) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -1938,8 +1938,8 @@ vhost_check_queue_inflights_split(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (!resubmit->resubmit_list) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for inflight desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for inflight desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2025,8 +2025,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit = rte_zmalloc_socket("resubmit", sizeof(struct rte_vhost_resubmit_info), 0, vq->numa_node); if (resubmit == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit info.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit info."); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2034,8 +2034,8 @@ vhost_check_queue_inflights_packed(struct virtio_net *dev, resubmit_num * sizeof(struct rte_vhost_resubmit_desc), 0, vq->numa_node); if (resubmit->resubmit_list == NULL) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate memory for resubmit desc.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate memory for resubmit desc."); rte_free(resubmit); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2082,8 +2082,8 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, file.fd = VIRTIO_INVALID_EVENTFD; else file.fd = ctx->fds[0]; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring kick idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring kick idx:%d file:%d", file.index, file.fd); /* Interpret ring addresses only when ring is started. 
*/ @@ -2111,15 +2111,15 @@ vhost_user_set_vring_kick(struct virtio_net **pdev, if (vq_is_packed(dev)) { if (vhost_check_queue_inflights_packed(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } } else { if (vhost_check_queue_inflights_split(dev, vq)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to inflights for vq: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to inflights for vq: %d", file.index); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2159,8 +2159,8 @@ vhost_user_get_vring_base(struct virtio_net **pdev, ctx->msg.payload.state.num = vq->last_avail_idx; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "vring base idx:%d file:%d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "vring base idx:%d file:%d", ctx->msg.payload.state.index, ctx->msg.payload.state.num); /* * Based on current qemu vhost-user implementation, this message is @@ -2217,8 +2217,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, bool enable = !!ctx->msg.payload.state.num; int index = (int)ctx->msg.payload.state.index; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "set queue enable: %d to qp idx: %d\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "set queue enable: %d to qp idx: %d", enable, index); vq = dev->virtqueue[index]; @@ -2226,8 +2226,8 @@ vhost_user_set_vring_enable(struct virtio_net **pdev, /* vhost_user_lock_all_queue_pairs locked all qps */ vq_assert_lock(dev, vq); if (enable && vq->async && vq->async->pkts_inflight_n) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to enable vring. Inflight packets must be completed first\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to enable vring. Inflight packets must be completed first"); return RTE_VHOST_MSG_RESULT_ERR; } } @@ -2267,13 +2267,13 @@ vhost_user_set_protocol_features(struct virtio_net **pdev, rte_vhost_driver_get_protocol_features(dev->ifname, &backend_protocol_features); if (protocol_features & ~backend_protocol_features) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "received invalid protocol features.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "received invalid protocol features."); return RTE_VHOST_MSG_RESULT_ERR; } dev->protocol_features = protocol_features; - VHOST_LOG_CONFIG(dev->ifname, INFO, - "negotiated Vhost-user protocol features: 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "negotiated Vhost-user protocol features: 0x%" PRIx64, dev->protocol_features); return RTE_VHOST_MSG_RESULT_OK; @@ -2295,13 +2295,13 @@ vhost_user_set_log_base(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid log fd: %d\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid log fd: %d", fd); return RTE_VHOST_MSG_RESULT_ERR; } if (ctx->msg.size != sizeof(VhostUserLog)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid log base msg size: %"PRId32" != %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid log base msg size: %"PRId32" != %d", ctx->msg.size, (int)sizeof(VhostUserLog)); goto close_msg_fds; } @@ -2311,14 +2311,14 @@ vhost_user_set_log_base(struct virtio_net **pdev, /* Check for mmap size and offset overflow. 
*/ if (off >= -size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "log offset %#"PRIx64" and log size %#"PRIx64" overflow\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "log offset %#"PRIx64" and log size %#"PRIx64" overflow", off, size); goto close_msg_fds; } - VHOST_LOG_CONFIG(dev->ifname, INFO, - "log mmap size: %"PRId64", offset: %"PRId64"\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "log mmap size: %"PRId64", offset: %"PRId64, size, off); /* @@ -2329,7 +2329,7 @@ vhost_user_set_log_base(struct virtio_net **pdev, alignment = get_blk_size(fd); close(fd); if (addr == MAP_FAILED) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "mmap log base failed!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "mmap log base failed!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2359,8 +2359,8 @@ vhost_user_set_log_base(struct virtio_net **pdev, * caching will be done, which will impact performance */ if (!vq->log_cache) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to allocate VQ logging cache\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to allocate VQ logging cache"); } /* @@ -2387,7 +2387,7 @@ static int vhost_user_set_log_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; close(ctx->fds[0]); - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "not implemented.\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "not implemented."); return RTE_VHOST_MSG_RESULT_OK; } @@ -2409,8 +2409,8 @@ vhost_user_send_rarp(struct virtio_net **pdev, uint8_t *mac = (uint8_t *)&ctx->msg.payload.u64; struct rte_vdpa_device *vdpa_dev; - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "MAC: " RTE_ETHER_ADDR_PRT_FMT "\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "MAC: " RTE_ETHER_ADDR_PRT_FMT, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]); memcpy(dev->mac.addr_bytes, mac, 6); @@ -2438,8 +2438,8 @@ vhost_user_net_set_mtu(struct virtio_net **pdev, if (ctx->msg.payload.u64 < VIRTIO_MIN_MTU || ctx->msg.payload.u64 > VIRTIO_MAX_MTU) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid MTU size (%"PRIu64")\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid MTU size (%"PRIu64")", ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; @@ -2462,8 +2462,8 @@ vhost_user_set_req_fd(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (fd < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid file descriptor for backend channel (%d)\n", fd); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid file descriptor for backend channel (%d)", fd); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2563,7 +2563,7 @@ vhost_user_get_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2573,10 +2573,10 @@ vhost_user_get_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size); if (ret != 0) { ctx->msg.size = 0; - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() return error!"); } } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "get_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "get_config() not supported!"); } return RTE_VHOST_MSG_RESULT_REPLY; @@ -2595,14 +2595,14 @@ vhost_user_set_config(struct virtio_net **pdev, return RTE_VHOST_MSG_RESULT_ERR; if (ctx->msg.payload.cfg.size > VHOST_USER_MAX_CONFIG_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost_user_config size: %"PRIu32", should not be larger than %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost_user_config size: %"PRIu32", should not be larger 
than %d", ctx->msg.payload.cfg.size, VHOST_USER_MAX_CONFIG_SIZE); goto out; } if (!vdpa_dev) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "is not vDPA device!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "is not vDPA device!"); goto out; } @@ -2613,9 +2613,9 @@ vhost_user_set_config(struct virtio_net **pdev, ctx->msg.payload.cfg.size, ctx->msg.payload.cfg.flags); if (ret) - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() return error!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() return error!"); } else { - VHOST_LOG_CONFIG(dev->ifname, ERR, "set_config() not supported!\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "set_config() not supported!"); } return RTE_VHOST_MSG_RESULT_OK; @@ -2676,7 +2676,7 @@ vhost_user_iotlb_msg(struct virtio_net **pdev, } break; default: - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid IOTLB message type (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid IOTLB message type (%d)", imsg->type); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2696,16 +2696,16 @@ vhost_user_set_postcopy_advise(struct virtio_net **pdev, dev->postcopy_ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK); if (dev->postcopy_ufd == -1) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "userfaultfd not available: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "userfaultfd not available: %s", strerror(errno)); return RTE_VHOST_MSG_RESULT_ERR; } api_struct.api = UFFD_API; api_struct.features = 0; if (ioctl(dev->postcopy_ufd, UFFDIO_API, &api_struct)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "UFFDIO_API ioctl failure: %s\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "UFFDIO_API ioctl failure: %s", strerror(errno)); close(dev->postcopy_ufd); dev->postcopy_ufd = -1; @@ -2731,8 +2731,8 @@ vhost_user_set_postcopy_listen(struct virtio_net **pdev, struct virtio_net *dev = *pdev; if (dev->mem && dev->mem->nregions) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "regions already registered at postcopy-listen\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "regions already registered at postcopy-listen"); return RTE_VHOST_MSG_RESULT_ERR; } dev->postcopy_listening = 1; @@ -2783,8 +2783,8 @@ vhost_user_set_status(struct virtio_net **pdev, /* As per Virtio specification, the device status is 8bits long */ if (ctx->msg.payload.u64 > UINT8_MAX) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64 "\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "invalid VHOST_USER_SET_STATUS payload 0x%" PRIx64, ctx->msg.payload.u64); return RTE_VHOST_MSG_RESULT_ERR; } @@ -2793,8 +2793,8 @@ vhost_user_set_status(struct virtio_net **pdev, if ((dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK) && (dev->flags & VIRTIO_DEV_FEATURES_FAILED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "FEATURES_OK bit is set but feature negotiation failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "FEATURES_OK bit is set but feature negotiation failed"); /* * Clear the bit to let the driver know about the feature * negotiation failure @@ -2802,27 +2802,27 @@ vhost_user_set_status(struct virtio_net **pdev, dev->status &= ~VIRTIO_DEVICE_STATUS_FEATURES_OK; } - VHOST_LOG_CONFIG(dev->ifname, INFO, "new device status(0x%08x):\n", dev->status); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, "new device status(0x%08x):", dev->status); + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-RESET: %u", (dev->status == VIRTIO_DEVICE_STATUS_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-ACKNOWLEDGE: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-ACKNOWLEDGE: %u", !!(dev->status & 
VIRTIO_DEVICE_STATUS_ACK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FEATURES_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FEATURES_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FEATURES_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DRIVER_OK: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DRIVER_OK: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DRIVER_OK)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-DEVICE_NEED_RESET: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-DEVICE_NEED_RESET: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_DEV_NEED_RESET)); - VHOST_LOG_CONFIG(dev->ifname, INFO, - "\t-FAILED: %u\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "\t-FAILED: %u", !!(dev->status & VIRTIO_DEVICE_STATUS_FAILED)); return RTE_VHOST_MSG_RESULT_OK; @@ -2881,14 +2881,14 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * goto out; if (ret != VHOST_USER_HDR_SIZE) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Unexpected header size read\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Unexpected header size read"); ret = -1; goto out; } if (ctx->msg.size) { if (ctx->msg.size > sizeof(ctx->msg.payload)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid msg size: %d\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid msg size: %d", ctx->msg.size); ret = -1; goto out; @@ -2897,7 +2897,7 @@ read_vhost_message(struct virtio_net *dev, int sockfd, struct vhu_msg_context * if (ret <= 0) goto out; if (ret != (int)ctx->msg.size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "read control message failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "read control message failed"); ret = -1; goto out; } @@ -2949,24 +2949,24 @@ send_vhost_backend_message_process_reply(struct virtio_net *dev, struct vhu_msg_ rte_spinlock_lock(&dev->backend_req_lock); ret = send_vhost_backend_message(dev, ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); goto out; } ret = read_vhost_message(dev, dev->backend_req_fd, &msg_reply); if (ret <= 0) { if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost read backend message reply failed\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost read backend message reply failed"); else - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); ret = -1; goto out; } if (msg_reply.msg.request.backend != ctx->msg.request.backend) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "received unexpected msg type (%u), expected %u\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "received unexpected msg type (%u), expected %u", msg_reply.msg.request.backend, ctx->msg.request.backend); ret = -1; goto out; @@ -3010,7 +3010,7 @@ vhost_user_check_and_alloc_queue_pair(struct virtio_net *dev, } if (vring_idx >= VHOST_MAX_VRING) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "invalid vring index: %u\n", vring_idx); + VHOST_CONFIG_LOG(dev->ifname, ERR, "invalid vring index: %u", vring_idx); return -1; } @@ -3078,8 +3078,8 @@ vhost_user_msg_handler(int vid, int fd) if (!dev->notify_ops) { dev->notify_ops = vhost_driver_callback_get(dev->ifname); if (!dev->notify_ops) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "failed to get callback ops for driver\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to get callback ops for driver"); return -1; } } @@ -3087,7 
+3087,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.msg.request.frontend = VHOST_USER_NONE; ret = read_vhost_message(dev, fd, &ctx); if (ret == 0) { - VHOST_LOG_CONFIG(dev->ifname, INFO, "vhost peer closed\n"); + VHOST_CONFIG_LOG(dev->ifname, INFO, "vhost peer closed"); return -1; } @@ -3098,7 +3098,7 @@ vhost_user_msg_handler(int vid, int fd) msg_handler = NULL; if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost read message %s%s%sfailed\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost read message %s%s%sfailed", msg_handler != NULL ? "for " : "", msg_handler != NULL ? msg_handler->description : "", msg_handler != NULL ? " " : ""); @@ -3107,20 +3107,20 @@ vhost_user_msg_handler(int vid, int fd) if (msg_handler != NULL && msg_handler->description != NULL) { if (request != VHOST_USER_IOTLB_MSG) - VHOST_LOG_CONFIG(dev->ifname, INFO, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, INFO, + "read message %s", msg_handler->description); else - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "read message %s\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "read message %s", msg_handler->description); } else { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "external request %d\n", request); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "external request %d", request); } ret = vhost_user_check_and_alloc_queue_pair(dev, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to alloc queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to alloc queue"); return -1; } @@ -3187,20 +3187,20 @@ vhost_user_msg_handler(int vid, int fd) switch (msg_result) { case RTE_VHOST_MSG_RESULT_ERR: - VHOST_LOG_CONFIG(dev->ifname, ERR, - "processing %s failed.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "processing %s failed.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_OK: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded.", msg_handler->description); handled = true; break; case RTE_VHOST_MSG_RESULT_REPLY: - VHOST_LOG_CONFIG(dev->ifname, DEBUG, - "processing %s succeeded and needs reply.\n", + VHOST_CONFIG_LOG(dev->ifname, DEBUG, + "processing %s succeeded and needs reply.", msg_handler->description); send_vhost_reply(dev, fd, &ctx); handled = true; @@ -3229,8 +3229,8 @@ vhost_user_msg_handler(int vid, int fd) /* If message was not handled at this stage, treat it as an error */ if (!handled) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "vhost message (req: %d) was not handled.\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "vhost message (req: %d) was not handled.", request); close_msg_fds(&ctx); msg_result = RTE_VHOST_MSG_RESULT_ERR; @@ -3247,7 +3247,7 @@ vhost_user_msg_handler(int vid, int fd) ctx.fd_num = 0; send_vhost_reply(dev, fd, &ctx); } else if (msg_result == RTE_VHOST_MSG_RESULT_ERR) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "vhost message handling failed.\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "vhost message handling failed."); ret = -1; goto unlock; } @@ -3296,7 +3296,7 @@ vhost_user_msg_handler(int vid, int fd) if (!(dev->flags & VIRTIO_DEV_VDPA_CONFIGURED)) { if (vdpa_dev->ops->dev_conf(dev->vid)) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to configure vDPA device\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to configure vDPA device"); else dev->flags |= VIRTIO_DEV_VDPA_CONFIGURED; } @@ -3324,8 +3324,8 @@ vhost_user_iotlb_miss(struct virtio_net *dev, uint64_t iova, uint8_t perm) ret = send_vhost_message(dev, dev->backend_req_fd, &ctx); if (ret < 0) { - VHOST_LOG_CONFIG(dev->ifname, 
ERR, - "failed to send IOTLB miss message (%d)\n", + VHOST_CONFIG_LOG(dev->ifname, ERR, + "failed to send IOTLB miss message (%d)", ret); return ret; } @@ -3358,7 +3358,7 @@ rte_vhost_backend_config_change(int vid, bool need_reply) } if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to send config change (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to send config change (%d)", ret); return ret; } @@ -3390,7 +3390,7 @@ static int vhost_user_backend_set_vring_host_notifier(struct virtio_net *dev, ret = send_vhost_backend_message_process_reply(dev, &ctx); if (ret < 0) - VHOST_LOG_CONFIG(dev->ifname, ERR, "failed to set host notifier (%d)\n", ret); + VHOST_CONFIG_LOG(dev->ifname, ERR, "failed to set host notifier (%d)", ret); return ret; } diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c index 8af20f1487..280d4845f8 100644 --- a/lib/vhost/virtio_net.c +++ b/lib/vhost/virtio_net.c @@ -130,8 +130,8 @@ vhost_async_dma_transfer_one(struct virtio_net *dev, struct vhost_virtqueue *vq, */ if (unlikely(copy_idx < 0)) { if (!vhost_async_dma_copy_log) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA copy failed for channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA copy failed for channel %d:%u", dma_id, vchan_id); vhost_async_dma_copy_log = true; } @@ -201,8 +201,8 @@ vhost_async_dma_check_completed(struct virtio_net *dev, int16_t dma_id, uint16_t */ nr_copies = rte_dma_completed(dma_id, vchan_id, max_pkts, &last_idx, &has_error); if (unlikely(!vhost_async_dma_complete_log && has_error)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "DMA completion failure on channel %d:%u\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "DMA completion failure on channel %d:%u", dma_id, vchan_id); vhost_async_dma_complete_log = true; } else if (nr_copies == 0) { @@ -1062,7 +1062,7 @@ async_iter_initialize(struct virtio_net *dev, struct vhost_async *async) struct vhost_iov_iter *iter; if (unlikely(async->iovec_idx >= VHOST_MAX_ASYNC_VEC)) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); return -1; } @@ -1084,7 +1084,7 @@ async_iter_add_iovec(struct virtio_net *dev, struct vhost_async *async, static bool vhost_max_async_vec_log; if (!vhost_max_async_vec_log) { - VHOST_LOG_DATA(dev->ifname, ERR, "no more async iovec available\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "no more async iovec available"); vhost_max_async_vec_log = true; } @@ -1145,8 +1145,8 @@ async_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq, host_iova = (void *)(uintptr_t)gpa_to_first_hpa(dev, buf_iova + buf_offset, cpy_len, &mapped_len); if (unlikely(!host_iova)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: failed to get host iova.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: failed to get host iova.", __func__); return -1; } @@ -1243,7 +1243,7 @@ mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq, } else hdr = (struct virtio_net_hdr_mrg_rxbuf *)(uintptr_t)hdr_addr; - VHOST_LOG_DATA(dev->ifname, DEBUG, "RX: num merge buffers %d\n", num_buffers); + VHOST_DATA_LOG(dev->ifname, DEBUG, "RX: num merge buffers %d", num_buffers); if (unlikely(buf_len < dev->vhost_hlen)) { buf_offset = dev->vhost_hlen - buf_len; @@ -1428,14 +1428,14 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, 
DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, @@ -1645,12 +1645,12 @@ virtio_dev_rx_single_packed(struct virtio_net *dev, if (unlikely(vhost_enqueue_single_packed(dev, vq, pkt, buf_vec, &nr_descs) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + nr_descs); vq_inc_last_avail_packed(vq, nr_descs); @@ -1702,7 +1702,7 @@ virtio_dev_rx(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); rte_rwlock_read_lock(&vq->access_lock); if (unlikely(!vq->enabled)) @@ -1744,15 +1744,15 @@ rte_vhost_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -1821,14 +1821,14 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(reserve_avail_buf_split(dev, vq, pkt_len, buf_vec, &num_buffers, avail_head, &nr_vec) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, + "failed to get enough desc from vring"); vq->shadow_used_idx -= num_buffers; break; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + num_buffers); if (mbuf_to_desc(dev, vq, pkts[pkt_idx], buf_vec, nr_vec, num_buffers, true) < 0) { @@ -1853,8 +1853,8 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue if (unlikely(pkt_err)) { uint16_t num_descs = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); /* update number of completed packets */ @@ -1967,12 +1967,12 @@ virtio_dev_rx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, if (unlikely(vhost_enqueue_async_packed(dev, vq, pkt, buf_vec, nr_descs, nr_buffers) < 0)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "failed to get enough desc from vring\n"); + VHOST_DATA_LOG(dev->ifname, DEBUG, "failed to get enough desc from vring"); return -1; } - VHOST_LOG_DATA(dev->ifname, DEBUG, - "current index %d | end index %d\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "current index %d | end index %d", vq->last_avail_idx, vq->last_avail_idx + *nr_descs); return 0; @@ -2151,8 +2151,8 @@ 
virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: failed to transfer %u packets for queue %u.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: failed to transfer %u packets for queue %u.", __func__, pkt_err, vq->index); dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx); } @@ -2344,18 +2344,18 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, if (unlikely(!dev)) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2363,15 +2363,15 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, - "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, + "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); goto out; } @@ -2399,15 +2399,15 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2417,16 +2417,16 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id, vq_assert_lock(dev, vq); if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: async not registered for virtqueue %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: async not registered for virtqueue %d.", __func__, queue_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2455,15 +2455,15 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, if (!dev) return 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(queue_id >= dev->nr_vring)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %u.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, 
"%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } @@ -2471,20 +2471,20 @@ rte_vhost_clear_queue(int vid, uint16_t queue_id, struct rte_mbuf **pkts, vq = dev->virtqueue[queue_id]; if (rte_rwlock_read_trylock(&vq->access_lock)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: virtqueue %u is busy.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: virtqueue %u is busy.", __func__, queue_id); return 0; } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %u.", __func__, queue_id); goto out_access_unlock; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); goto out_access_unlock; } @@ -2511,12 +2511,12 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq, { uint32_t nb_tx = 0; - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -2565,15 +2565,15 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -2743,8 +2743,8 @@ vhost_dequeue_offload_legacy(struct virtio_net *dev, struct virtio_net_hdr *hdr, m->l4_len = sizeof(struct rte_udp_hdr); break; default: - VHOST_LOG_DATA(dev->ifname, WARNING, - "unsupported gso type %u.\n", + VHOST_DATA_LOG(dev->ifname, WARNING, + "unsupported gso type %u.", hdr->gso_type); goto error; } @@ -2975,8 +2975,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq, if (mbuf_avail == 0) { cur = rte_pktmbuf_alloc(mbuf_pool); if (unlikely(cur == NULL)) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed to allocate memory for mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, + "failed to allocate memory for mbuf."); goto error; } @@ -3041,7 +3041,7 @@ virtio_dev_extbuf_alloc(struct virtio_net *dev, struct rte_mbuf *pkt, uint32_t s virtio_dev_extbuf_free, buf); if (unlikely(shinfo == NULL)) { rte_free(buf); - VHOST_LOG_DATA(dev->ifname, ERR, "failed to init shinfo\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to init shinfo"); return -1; } @@ -3097,11 +3097,11 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]); - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s\n", __func__); + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s", __func__); count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + 
VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) return 0; @@ -3138,8 +3138,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. */ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3152,7 +3152,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } dropped += 1; @@ -3421,8 +3421,8 @@ vhost_dequeue_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3433,7 +3433,7 @@ vhost_dequeue_single_packed(struct virtio_net *dev, mbuf_pool, legacy_ol_flags, 0, false); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to copy desc to mbuf.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf."); allocerr_warned = true; } return -1; @@ -3556,15 +3556,15 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, return 0; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } @@ -3609,7 +3609,7 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } @@ -3731,7 +3731,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, count = RTE_MIN(count, MAX_PKT_BURST); count = RTE_MIN(count, avail_entries); - VHOST_LOG_DATA(dev->ifname, DEBUG, "about to dequeue %u buffers\n", count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count); if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) goto out; @@ -3768,8 +3768,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, * is required. Drop this packet. 
*/ if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed mbuf alloc of size %d from %s\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed mbuf alloc of size %d from %s", __func__, buf_len, mbuf_pool->name); allocerr_warned = true; } @@ -3783,8 +3783,8 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, legacy_ol_flags, slot_idx, true); if (unlikely(err)) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, - "%s: Failed to offload copies to async channel.\n", + VHOST_DATA_LOG(dev->ifname, ERR, + "%s: Failed to offload copies to async channel.", __func__); allocerr_warned = true; } @@ -3814,7 +3814,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq, pkt_err = pkt_idx - n_xfer; if (unlikely(pkt_err)) { - VHOST_LOG_DATA(dev->ifname, DEBUG, "%s: failed to transfer data.\n", + VHOST_DATA_LOG(dev->ifname, DEBUG, "%s: failed to transfer data.", __func__); pkt_idx = n_xfer; @@ -3914,7 +3914,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(virtio_dev_pktmbuf_prep(dev, pkts, buf_len))) { if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "Failed mbuf alloc of size %d from %s.", buf_len, mbuf_pool->name); allocerr_warned = true; @@ -3927,7 +3927,7 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev, if (unlikely(err)) { rte_pktmbuf_free(pkts); if (!allocerr_warned) { - VHOST_LOG_DATA(dev->ifname, ERR, "Failed to copy desc to mbuf on.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "Failed to copy desc to mbuf on."); allocerr_warned = true; } return -1; @@ -4019,7 +4019,7 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq, struct async_inflight_info *pkts_info = async->pkts_info; struct rte_mbuf *pkts_prealloc[MAX_PKT_BURST]; - VHOST_LOG_DATA(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers\n", dev->vid, count); + VHOST_DATA_LOG(dev->ifname, DEBUG, "(%d) about to dequeue %u buffers", dev->vid, count); async_iter_reset(async); @@ -4153,26 +4153,26 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, *nr_inflight = -1; if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: built-in vhost net backend is disabled.", __func__); return 0; } if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid virtqueue idx %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid virtqueue idx %d.", __func__, queue_id); return 0; } if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid dma id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid dma id %d.", __func__, dma_id); return 0; } if (unlikely(!dma_copy_track[dma_id].vchans || !dma_copy_track[dma_id].vchans[vchan_id].pkts_cmpl_flag_addr)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: invalid channel %d:%u.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: invalid channel %d:%u.", __func__, dma_id, vchan_id); return 0; } @@ -4188,7 +4188,7 @@ rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, } if (unlikely(!vq->async)) { - VHOST_LOG_DATA(dev->ifname, ERR, "%s: async not registered for queue id %d.\n", + VHOST_DATA_LOG(dev->ifname, ERR, "%s: async not registered for queue id %d.", __func__, queue_id); count = 0; goto out_access_unlock; @@ -4224,7 +4224,7 @@ 
rte_vhost_async_try_dequeue_burst(int vid, uint16_t queue_id, rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac); if (rarp_mbuf == NULL) { - VHOST_LOG_DATA(dev->ifname, ERR, "failed to make RARP packet.\n"); + VHOST_DATA_LOG(dev->ifname, ERR, "failed to make RARP packet."); count = 0; goto out; } diff --git a/lib/vhost/virtio_net_ctrl.c b/lib/vhost/virtio_net_ctrl.c index c4847f84ed..8f78122361 100644 --- a/lib/vhost/virtio_net_ctrl.c +++ b/lib/vhost/virtio_net_ctrl.c @@ -36,13 +36,13 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, avail_idx = rte_atomic_load_explicit((unsigned short __rte_atomic *)&cvq->avail->idx, rte_memory_order_acquire); if (avail_idx == cvq->last_avail_idx) { - VHOST_LOG_CONFIG(dev->ifname, DEBUG, "Control queue empty\n"); + VHOST_CONFIG_LOG(dev->ifname, DEBUG, "Control queue empty"); return 0; } desc_idx = cvq->avail->ring[cvq->last_avail_idx]; if (desc_idx >= cvq->size) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Out of range desc index, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Out of range desc index, dropping"); goto err; } @@ -55,7 +55,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto err; } @@ -72,28 +72,28 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, if (descs[desc_idx].flags & VRING_DESC_F_WRITE) { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } if (desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Invalid ack size for ctrl req, dropping\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Invalid ack size for ctrl req, dropping"); goto err; } ctrl_elem->desc_ack = (uint8_t *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_WO); if (!ctrl_elem->desc_ack || desc_len != sizeof(uint8_t)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Failed to map ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Failed to map ctrl ack descriptor"); goto err; } } else { if (ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, - "Unexpected ctrl chain layout\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, + "Unexpected ctrl chain layout"); goto err; } @@ -114,18 +114,18 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, ctrl_elem->n_descs = n_descs; if (!ctrl_elem->desc_ack) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Missing ctrl ack descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Missing ctrl ack descriptor"); goto err; } if (data_len < sizeof(ctrl_elem->ctrl_req->class) + sizeof(ctrl_elem->ctrl_req->command)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Invalid control header size\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Invalid control header size"); goto err; } ctrl_elem->ctrl_req = malloc(data_len); if (!ctrl_elem->ctrl_req) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to alloc ctrl request\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to alloc ctrl request"); goto err; } @@ -138,7 +138,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, descs = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, cvq, desc_iova, 
&desc_len, VHOST_ACCESS_RO); if (!descs || desc_len != cvq->desc[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl indirect descs\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl indirect descs"); goto free_err; } @@ -153,7 +153,7 @@ virtio_net_ctrl_pop(struct virtio_net *dev, struct vhost_virtqueue *cvq, desc_addr = vhost_iova_to_vva(dev, cvq, desc_iova, &desc_len, VHOST_ACCESS_RO); if (!desc_addr || desc_len < descs[desc_idx].len) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Failed to map ctrl descriptor\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Failed to map ctrl descriptor"); goto free_err; } @@ -199,7 +199,7 @@ virtio_net_ctrl_handle_req(struct virtio_net *dev, struct virtio_net_ctrl *ctrl_ uint32_t i; queue_pairs = *(uint16_t *)(uintptr_t)ctrl_req->command_data; - VHOST_LOG_CONFIG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs\n", queue_pairs); + VHOST_CONFIG_LOG(dev->ifname, INFO, "Ctrl req: MQ %u queue pairs", queue_pairs); ret = VIRTIO_NET_OK; for (i = 0; i < dev->nr_vring; i++) { @@ -253,12 +253,12 @@ virtio_net_ctrl_handle(struct virtio_net *dev) int ret = 0; if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "Packed ring not supported yet\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "Packed ring not supported yet"); return -1; } if (!dev->cvq) { - VHOST_LOG_CONFIG(dev->ifname, ERR, "missing control queue\n"); + VHOST_CONFIG_LOG(dev->ifname, ERR, "missing control queue"); return -1; } -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
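All of the conversions in this series, including the patch below, rely on the same mechanism: a per-line logging helper (RTE_LOG_LINE) that appends the trailing newline itself and, when built with GCC, turns an embedded '\n' in the caller's format string into a build error. The sketch that follows is a simplified illustration of that idea only; it is not the actual RTE_LOG_LINE implementation from lib/log, and it assumes GCC's non-pedantic constant folding of __builtin_strchr() inside static_assert.

#include <assert.h>
#include <stdio.h>

/*
 * Hypothetical LOG_LINE helper, for illustration only.
 * The macro appends the '\n' itself and rejects format strings that
 * already contain one (GCC extension: __builtin_strchr() on a string
 * literal folds to a constant usable in static_assert).
 */
#define LOG_LINE(fmt, ...) \
	do { \
		static_assert(!__builtin_strchr(fmt, '\n'), \
			"log format string must not contain '\\n'"); \
		printf(fmt "\n", ##__VA_ARGS__); \
	} while (0)

int main(void)
{
	LOG_LINE("vring call idx:%d file:%d", 0, 42); /* newline added by the macro */
	/* LOG_LINE("oops\n"); would fail to build */
	return 0;
}

With a helper of this shape, a stray "\n" in a format string is caught at compile time instead of showing up as a blank line at runtime, which is why the per-library logging macros in the patch below can simply forward to RTE_LOG_LINE.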
* [PATCH v5 13/13] lib: use per line logging in helpers 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (11 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 12/13] lib: replace logging helpers David Marchand @ 2023-12-20 15:36 ` David Marchand 2023-12-21 9:31 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand 2023-12-21 16:32 ` Stephen Hemminger 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-20 15:36 UTC (permalink / raw) To: dev Cc: thomas, ferruh.yigit, bruce.richardson, stephen, mb, Chengwen Feng, Andrew Rybchenko, Konstantin Ananyev, Nicolas Chautru, Cristian Dumitrescu, Fan Zhang, Ashish Gupta, Akhil Goyal, Kevin Laatz, Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, Byron Marohn, Yipeng Wang, Jerin Jacob, Erik Gabriel Carrillo, Vladimir Medvedkin, Elena Agostini, Kiran Kumar K, Nithin Dabilpuram, Zhirun Yan, Sameh Gobriel, Reshma Pattan, Srikanth Yalavarthi, Jasvinder Singh, Pavan Nikhilesh, Anatoly Burakov, David Hunt, Sivaprasad Tummala, Sachin Saxena, Hemant Agrawal, Honnappa Nagarahalli, Ori Kam, Volodymyr Fialko, Ciara Power, Maxime Coquelin, Chenbo Xia Use RTE_LOG_LINE in existing macros that append a \n. This will help catching unwanted newline character or multilines in log messages. Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Chengwen Feng <fengchengwen@huawei.com> Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru> --- Changes since v4: - converted more helpers, Changes since v3: - fixed some checkpatch complaints, Changes since RFC v1: - converted all logging helpers in lib/, --- lib/acl/acl_log.h | 4 ++-- lib/bbdev/rte_bbdev.c | 5 +++-- lib/bpf/bpf_impl.h | 2 +- lib/cfgfile/rte_cfgfile.c | 4 ++-- lib/compressdev/rte_compressdev_internal.h | 5 +++-- lib/cryptodev/rte_cryptodev.h | 22 ++++++++++------------ lib/dmadev/rte_dmadev.c | 6 ++++-- lib/eal/common/eal_private.h | 4 ++-- lib/eal/windows/include/rte_windows.h | 6 +++--- lib/efd/rte_efd.c | 4 ++-- lib/ethdev/rte_ethdev.h | 3 +-- lib/eventdev/eventdev_pmd.h | 12 ++++++------ lib/eventdev/rte_event_timer_adapter.c | 17 ++++++++++------- lib/fib/fib_log.h | 4 ++-- lib/gpudev/gpudev.c | 6 ++++-- lib/graph/graph_private.h | 7 ++++--- lib/hash/rte_cuckoo_hash.c | 4 ++-- lib/hash/rte_fbk_hash.c | 4 ++-- lib/hash/rte_hash_crc.c | 4 ++-- lib/hash/rte_thash.c | 4 ++-- lib/hash/rte_thash_gfni.c | 4 ++-- lib/ip_frag/ip_frag_common.h | 4 ++-- lib/latencystats/rte_latencystats.c | 4 ++-- lib/lpm/lpm_log.h | 4 ++-- lib/mbuf/mbuf_log.h | 4 ++-- lib/member/member.h | 4 ++-- lib/mempool/rte_mempool.h | 4 ++-- lib/metrics/rte_metrics_telemetry.c | 4 ++-- lib/mldev/rte_mldev.h | 5 +++-- lib/net/rte_net_crc.c | 8 ++++---- lib/node/node_private.h | 8 +++++--- lib/pdump/rte_pdump.c | 5 ++--- lib/pipeline/rte_pipeline.c | 4 ++-- lib/port/port_log.h | 4 ++-- lib/power/guest_channel.c | 4 ++-- lib/power/power_common.h | 6 +++--- lib/rawdev/rte_rawdev_pmd.h | 4 ++-- lib/rcu/rte_rcu_qsbr.c | 2 +- lib/rcu/rte_rcu_qsbr.h | 8 +++----- lib/regexdev/rte_regexdev.h | 3 +-- lib/reorder/rte_reorder.c | 4 ++-- lib/rib/rib_log.h | 4 ++-- lib/ring/rte_ring.c | 4 ++-- lib/sched/rte_sched_log.h | 4 ++-- lib/stack/stack_pvt.h | 4 ++-- lib/table/table_log.h | 4 ++-- lib/telemetry/telemetry.c | 4 +--- lib/vhost/fd_man.c | 4 ++-- lib/vhost/vhost.h | 4 ++-- lib/vhost/vhost_crypto.c | 6 +++--- 50 files changed, 133 insertions(+), 129 deletions(-) diff --git a/lib/acl/acl_log.h b/lib/acl/acl_log.h index 
2d7c376058..d2310401a8 100644 --- a/lib/acl/acl_log.h +++ b/lib/acl/acl_log.h @@ -4,5 +4,5 @@ extern int acl_logtype; #define RTE_LOGTYPE_ACL acl_logtype -#define ACL_LOG(level, fmt, ...) \ - RTE_LOG(level, ACL, fmt "\n", ## __VA_ARGS__) +#define ACL_LOG(level, ...) \ + RTE_LOG_LINE(level, ACL, "" __VA_ARGS__) diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index e09bb97abb..13bde3c25b 100644 --- a/lib/bbdev/rte_bbdev.c +++ b/lib/bbdev/rte_bbdev.c @@ -28,10 +28,11 @@ /* BBDev library logging ID */ RTE_LOG_REGISTER_DEFAULT(bbdev_logtype, NOTICE); +#define RTE_LOGTYPE_BBDEV bbdev_logtype /* Helper macro for logging */ -#define rte_bbdev_log(level, fmt, ...) \ - rte_log(RTE_LOG_ ## level, bbdev_logtype, fmt "\n", ##__VA_ARGS__) +#define rte_bbdev_log(level, ...) \ + RTE_LOG_LINE(level, BBDEV, "" __VA_ARGS__) #define rte_bbdev_log_debug(fmt, ...) \ rte_bbdev_log(DEBUG, RTE_STR(__LINE__) ":%s() " fmt, __func__, \ diff --git a/lib/bpf/bpf_impl.h b/lib/bpf/bpf_impl.h index 6a82ae4ef2..1a3d97d0c7 100644 --- a/lib/bpf/bpf_impl.h +++ b/lib/bpf/bpf_impl.h @@ -30,7 +30,7 @@ extern int rte_bpf_logtype; #define RTE_LOGTYPE_BPF rte_bpf_logtype #define RTE_BPF_LOG_LINE(lvl, fmt, args...) \ - RTE_LOG(lvl, BPF, fmt "\n", ##args) + RTE_LOG_LINE(lvl, BPF, fmt, ##args) static inline size_t bpf_size(uint32_t bpf_op_sz) diff --git a/lib/cfgfile/rte_cfgfile.c b/lib/cfgfile/rte_cfgfile.c index 2f9cc0722a..6a5e4fd942 100644 --- a/lib/cfgfile/rte_cfgfile.c +++ b/lib/cfgfile/rte_cfgfile.c @@ -29,10 +29,10 @@ struct rte_cfgfile { /* Setting up dynamic logging 8< */ RTE_LOG_REGISTER_DEFAULT(cfgfile_logtype, INFO); +#define RTE_LOGTYPE_CFGFILE cfgfile_logtype #define CFG_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, cfgfile_logtype, "%s(): " fmt "\n", \ - __func__, ## args) + RTE_LOG_LINE(level, CFGFILE, "%s(): " fmt, __func__, ## args) /* >8 End of setting up dynamic logging */ /** when we resize a file structure, how many extra entries diff --git a/lib/compressdev/rte_compressdev_internal.h b/lib/compressdev/rte_compressdev_internal.h index b3b193e3ee..01b7764282 100644 --- a/lib/compressdev/rte_compressdev_internal.h +++ b/lib/compressdev/rte_compressdev_internal.h @@ -21,9 +21,10 @@ extern "C" { /* Logging Macros */ extern int compressdev_logtype; +#define RTE_LOGTYPE_COMPRESSDEV compressdev_logtype + #define COMPRESSDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, compressdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, COMPRESSDEV, "%s(): " fmt, __func__, ## args) /** * Dequeue processed packets from queue pair of a device. diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h index 30ad2d9a95..359b6c2b29 100644 --- a/lib/cryptodev/rte_cryptodev.h +++ b/lib/cryptodev/rte_cryptodev.h @@ -36,24 +36,22 @@ extern int rte_cryptodev_logtype; /* Logging Macros */ #define CDEV_LOG_ERR(...) \ - RTE_LOG(ERR, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(ERR, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define CDEV_LOG_INFO(...) \ - RTE_LOG(INFO, CRYPTODEV, \ - RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(INFO, CRYPTODEV, "" __VA_ARGS__) #define CDEV_LOG_DEBUG(...) 
\ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define CDEV_PMD_TRACE(...) \ - RTE_LOG(DEBUG, CRYPTODEV, \ - RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - dev, __func__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, CRYPTODEV, \ + RTE_FMT("[%s] %s: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + dev, __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) /** * A macro that points to an offset from the start diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c index 009a21849a..5953a77bd6 100644 --- a/lib/dmadev/rte_dmadev.c +++ b/lib/dmadev/rte_dmadev.c @@ -32,9 +32,11 @@ static struct { } *dma_devices_shared_data; RTE_LOG_REGISTER_DEFAULT(rte_dma_logtype, INFO); +#define RTE_LOGTYPE_DMA rte_dma_logtype + #define RTE_DMA_LOG(level, ...) \ - rte_log(RTE_LOG_ ## level, rte_dma_logtype, RTE_FMT("dma: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, DMA, RTE_FMT("dma: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) int rte_dma_dev_max(size_t dev_max) diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h index afd87000e7..71523cfdb8 100644 --- a/lib/eal/common/eal_private.h +++ b/lib/eal/common/eal_private.h @@ -748,7 +748,7 @@ int eal_asprintf(char **buffer, const char *format, ...); eal_asprintf(buffer, format, ##__VA_ARGS__) #endif -#define EAL_LOG(level, fmt, ...) \ - RTE_LOG(level, EAL, fmt "\n", ## __VA_ARGS__) +#define EAL_LOG(level, ...) \ + RTE_LOG_LINE(level, EAL, "" __VA_ARGS__) #endif /* _EAL_PRIVATE_H_ */ diff --git a/lib/eal/windows/include/rte_windows.h b/lib/eal/windows/include/rte_windows.h index 83730c3d2e..0b0d117865 100644 --- a/lib/eal/windows/include/rte_windows.h +++ b/lib/eal/windows/include/rte_windows.h @@ -48,9 +48,9 @@ extern "C" { * Log GetLastError() with context, usually a Win32 API function and arguments. */ #define RTE_LOG_WIN32_ERR(...) \ - RTE_LOG(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ - RTE_FMT_HEAD(__VA_ARGS__,) "\n", GetLastError(), \ - RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \ + RTE_FMT_HEAD(__VA_ARGS__ ,), GetLastError(), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) #ifdef __cplusplus } diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c index cd0d90f328..d3b732f2e8 100644 --- a/lib/efd/rte_efd.c +++ b/lib/efd/rte_efd.c @@ -31,8 +31,8 @@ RTE_LOG_REGISTER_DEFAULT(efd_logtype, INFO); #define RTE_LOGTYPE_EFD efd_logtype -#define EFD_LOG(level, fmt, ...) \ - RTE_LOG(level, EFD, fmt "\n", ## __VA_ARGS__) +#define EFD_LOG(level, ...) \ + RTE_LOG_LINE(level, EFD, "" __VA_ARGS__) #define EFD_KEY(key_idx, table) (table->keys + ((key_idx) * table->key_len)) /** Hash function used to determine chunk_id and bin_id for a group */ diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index e89e474c39..21e3a21903 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -179,8 +179,7 @@ extern int rte_eth_dev_logtype; #define RTE_LOGTYPE_ETHDEV rte_eth_dev_logtype #define RTE_ETHDEV_LOG_LINE(level, ...) 
\ - RTE_LOG(level, ETHDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) + RTE_LOG_LINE(level, ETHDEV, "" __VA_ARGS__) struct rte_mbuf; diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 2ec5aec0a8..1790587808 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -33,15 +33,15 @@ extern "C" { /* Logging Macros */ #define RTE_EDEV_LOG_ERR(...) \ - RTE_LOG(ERR, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(ERR, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define RTE_EDEV_LOG_DEBUG(...) \ - RTE_LOG(DEBUG, EVENTDEV, \ - RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(DEBUG, EVENTDEV, \ + RTE_FMT("%s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #else #define RTE_EDEV_LOG_DEBUG(...) (void)0 #endif diff --git a/lib/eventdev/rte_event_timer_adapter.c b/lib/eventdev/rte_event_timer_adapter.c index 3f22e85173..e6d3492056 100644 --- a/lib/eventdev/rte_event_timer_adapter.c +++ b/lib/eventdev/rte_event_timer_adapter.c @@ -30,27 +30,30 @@ #define DATA_MZ_NAME_FORMAT "rte_event_timer_adapter_data_%d" RTE_LOG_REGISTER_SUFFIX(evtim_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM evtim_logtype RTE_LOG_REGISTER_SUFFIX(evtim_buffer_logtype, adapter.timer, NOTICE); +#define RTE_LOGTYPE_EVTIM_BUF evtim_buffer_logtype RTE_LOG_REGISTER_SUFFIX(evtim_svc_logtype, adapter.timer.svc, NOTICE); +#define RTE_LOGTYPE_EVTIM_SVC evtim_svc_logtype static struct rte_event_timer_adapter *adapters; static const struct event_timer_adapter_ops swtim_ops; #define EVTIM_LOG(level, logtype, ...) \ - rte_log(RTE_LOG_ ## level, logtype, \ - RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__,) \ - "\n", __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__,))) + RTE_LOG_LINE(level, logtype, \ + RTE_FMT("EVTIMER: %s() line %u: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) -#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, evtim_logtype, __VA_ARGS__) +#define EVTIM_LOG_ERR(...) EVTIM_LOG(ERR, EVTIM, __VA_ARGS__) #ifdef RTE_LIBRTE_EVENTDEV_DEBUG #define EVTIM_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM, __VA_ARGS__) #define EVTIM_BUF_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_buffer_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_BUF, __VA_ARGS__) #define EVTIM_SVC_LOG_DBG(...) \ - EVTIM_LOG(DEBUG, evtim_svc_logtype, __VA_ARGS__) + EVTIM_LOG(DEBUG, EVTIM_SVC, __VA_ARGS__) #else #define EVTIM_LOG_DBG(...) (void)0 #define EVTIM_BUF_LOG_DBG(...) (void)0 diff --git a/lib/fib/fib_log.h b/lib/fib/fib_log.h index aa901cb344..29ee356c4e 100644 --- a/lib/fib/fib_log.h +++ b/lib/fib/fib_log.h @@ -2,5 +2,5 @@ extern int fib_logtype; #define RTE_LOGTYPE_FIB fib_logtype -#define FIB_LOG(level, fmt, ...) \ - RTE_LOG(level, FIB, fmt "\n", ## __VA_ARGS__) +#define FIB_LOG(level, ...) \ + RTE_LOG_LINE(level, FIB, "" __VA_ARGS__) diff --git a/lib/gpudev/gpudev.c b/lib/gpudev/gpudev.c index 6845d18b4d..de8291151f 100644 --- a/lib/gpudev/gpudev.c +++ b/lib/gpudev/gpudev.c @@ -17,9 +17,11 @@ /* Logging */ RTE_LOG_REGISTER_DEFAULT(gpu_logtype, NOTICE); +#define RTE_LOGTYPE_GPUDEV gpu_logtype + #define GPU_LOG(level, ...) 
\ - rte_log(RTE_LOG_ ## level, gpu_logtype, RTE_FMT("gpu: " \ - RTE_FMT_HEAD(__VA_ARGS__, ) "\n", RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GPUDEV, RTE_FMT("gpu: " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) /* Set any driver error as EPERM */ #define GPU_DRV_RET(function) \ diff --git a/lib/graph/graph_private.h b/lib/graph/graph_private.h index d0ef13b205..f9274ce96c 100644 --- a/lib/graph/graph_private.h +++ b/lib/graph/graph_private.h @@ -18,11 +18,12 @@ #include "rte_graph_worker.h" extern int rte_graph_logtype; +#define RTE_LOGTYPE_GRAPH rte_graph_logtype #define GRAPH_LOG(level, ...) \ - rte_log(RTE_LOG_##level, rte_graph_logtype, \ - RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ - __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_LOG_LINE(level, GRAPH, \ + RTE_FMT("GRAPH: %s():%u " RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, __LINE__, RTE_FMT_TAIL(__VA_ARGS__ ,))) #define graph_err(...) GRAPH_LOG(ERR, __VA_ARGS__) #define graph_warn(...) GRAPH_LOG(WARNING, __VA_ARGS__) diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c index a1b5137bfb..70456754c4 100644 --- a/lib/hash/rte_cuckoo_hash.c +++ b/lib/hash/rte_cuckoo_hash.c @@ -28,8 +28,8 @@ /* needs to be before rte_cuckoo_hash.h */ RTE_LOG_REGISTER_DEFAULT(hash_logtype, INFO); #define RTE_LOGTYPE_HASH hash_logtype -#define HASH_LOG(level, fmt, ...) \ - RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) +#define HASH_LOG(level, ...) \ + RTE_LOG_LINE(level, HASH, "" __VA_ARGS__) #include "rte_cuckoo_hash.h" diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c index 681286b946..dacb7e8b09 100644 --- a/lib/hash/rte_fbk_hash.c +++ b/lib/hash/rte_fbk_hash.c @@ -21,8 +21,8 @@ RTE_LOG_REGISTER_SUFFIX(fbk_hash_logtype, fbk, INFO); #define RTE_LOGTYPE_HASH fbk_hash_logtype -#define HASH_LOG(level, fmt, ...) \ - RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) +#define HASH_LOG(level, ...) \ + RTE_LOG_LINE(level, HASH, "" __VA_ARGS__) TAILQ_HEAD(rte_fbk_hash_list, rte_tailq_entry); diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c index 997ac3c5fa..c037cdb0f0 100644 --- a/lib/hash/rte_hash_crc.c +++ b/lib/hash/rte_hash_crc.c @@ -9,8 +9,8 @@ RTE_LOG_REGISTER_SUFFIX(hash_crc_logtype, crc, INFO); #define RTE_LOGTYPE_HASH_CRC hash_crc_logtype -#define HASH_CRC_LOG(level, fmt, ...) \ - RTE_LOG(level, HASH_CRC, fmt "\n", ## __VA_ARGS__) +#define HASH_CRC_LOG(level, ...) \ + RTE_LOG_LINE(level, HASH_CRC, "" __VA_ARGS__) uint8_t rte_hash_crc32_alg = CRC32_SW; diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c index 3dcae495e9..e8de071ede 100644 --- a/lib/hash/rte_thash.c +++ b/lib/hash/rte_thash.c @@ -15,8 +15,8 @@ RTE_LOG_REGISTER_SUFFIX(thash_logtype, thash, INFO); #define RTE_LOGTYPE_HASH thash_logtype -#define HASH_LOG(level, fmt, ...) \ - RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) +#define HASH_LOG(level, ...) \ + RTE_LOG_LINE(level, HASH, "" __VA_ARGS__) #define THASH_NAME_LEN 64 #define TOEPLITZ_HASH_LEN 32 diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c index cd22a649f7..f1525f9838 100644 --- a/lib/hash/rte_thash_gfni.c +++ b/lib/hash/rte_thash_gfni.c @@ -11,8 +11,8 @@ RTE_LOG_REGISTER_SUFFIX(hash_gfni_logtype, gfni, INFO); #define RTE_LOGTYPE_HASH hash_gfni_logtype -#define HASH_LOG(level, fmt, ...) \ - RTE_LOG(level, HASH, fmt "\n", ## __VA_ARGS__) +#define HASH_LOG(level, ...) 
\ + RTE_LOG_LINE(level, HASH, "" __VA_ARGS__) uint32_t rte_thash_gfni(const uint64_t *mtrx __rte_unused, diff --git a/lib/ip_frag/ip_frag_common.h b/lib/ip_frag/ip_frag_common.h index 4e9637d12f..c766154dbe 100644 --- a/lib/ip_frag/ip_frag_common.h +++ b/lib/ip_frag/ip_frag_common.h @@ -22,8 +22,8 @@ extern int ipfrag_logtype; #define RTE_LOGTYPE_IPFRAG ipfrag_logtype /* logging macros. */ -#define IP_FRAG_LOG_LINE(level, fmt, ...) \ - RTE_LOG(level, IPFRAG, fmt "\n", ## __VA_ARGS__) +#define IP_FRAG_LOG_LINE(level, ...) \ + RTE_LOG_LINE(level, IPFRAG, "" __VA_ARGS__) #ifdef RTE_LIBRTE_IP_FRAG_DEBUG #define IP_FRAG_LOG(lvl, fmt, args...) RTE_LOG(lvl, IPFRAG, fmt, ##args) diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c index 6d7c4a3316..4ea9b0d75b 100644 --- a/lib/latencystats/rte_latencystats.c +++ b/lib/latencystats/rte_latencystats.c @@ -27,8 +27,8 @@ latencystat_cycles_per_ns(void) RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO); #define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype -#define LATENCY_STATS_LOG(level, fmt, ...) \ - RTE_LOG(level, LATENCY_STATS, fmt "\n", ## __VA_ARGS__) +#define LATENCY_STATS_LOG(level, ...) \ + RTE_LOG_LINE(level, LATENCY_STATS, "" __VA_ARGS__) static uint64_t timestamp_dynflag; static int timestamp_dynfield_offset = -1; diff --git a/lib/lpm/lpm_log.h b/lib/lpm/lpm_log.h index 1385b9b02a..1a22b573e2 100644 --- a/lib/lpm/lpm_log.h +++ b/lib/lpm/lpm_log.h @@ -2,5 +2,5 @@ extern int lpm_logtype; #define RTE_LOGTYPE_LPM lpm_logtype -#define LPM_LOG(level, fmt, ...) \ - RTE_LOG(level, LPM, fmt "\n", ## __VA_ARGS__) +#define LPM_LOG(level, ...) \ + RTE_LOG_LINE(level, LPM, "" __VA_ARGS__) diff --git a/lib/mbuf/mbuf_log.h b/lib/mbuf/mbuf_log.h index a8a674f6be..8feaf20747 100644 --- a/lib/mbuf/mbuf_log.h +++ b/lib/mbuf/mbuf_log.h @@ -2,5 +2,5 @@ extern int mbuf_logtype; #define RTE_LOGTYPE_MBUF mbuf_logtype -#define MBUF_LOG(level, fmt, ...) \ - RTE_LOG(level, MBUF, fmt "\n", ## __VA_ARGS__) +#define MBUF_LOG(level, ...) \ + RTE_LOG_LINE(level, MBUF, "" __VA_ARGS__) diff --git a/lib/member/member.h b/lib/member/member.h index a7b5b4a57c..cf600c4838 100644 --- a/lib/member/member.h +++ b/lib/member/member.h @@ -8,7 +8,7 @@ extern int librte_member_logtype; #define RTE_LOGTYPE_MEMBER librte_member_logtype #define MEMBER_LOG(level, ...) \ - RTE_LOG(level, MEMBER, \ - RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_LOG_LINE(level, MEMBER, \ + RTE_FMT("%s(): " RTE_FMT_HEAD(__VA_ARGS__ ,), \ __func__, RTE_FMT_TAIL(__VA_ARGS__ ,))) diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h index e44039dffb..6fa4d482c5 100644 --- a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -182,8 +182,8 @@ struct rte_mempool_objtlr { */ extern int rte_mempool_logtype; #define RTE_LOGTYPE_MEMPOOL rte_mempool_logtype -#define RTE_MEMPOOL_LOG(level, fmt, ...) \ - RTE_LOG(level, MEMPOOL, fmt "\n", ## __VA_ARGS__) +#define RTE_MEMPOOL_LOG(level, ...) \ + RTE_LOG_LINE(level, MEMPOOL, "" __VA_ARGS__) /** * A list of memory where objects are stored diff --git a/lib/metrics/rte_metrics_telemetry.c b/lib/metrics/rte_metrics_telemetry.c index 1d133e1f8c..b8c9d75a7d 100644 --- a/lib/metrics/rte_metrics_telemetry.c +++ b/lib/metrics/rte_metrics_telemetry.c @@ -16,11 +16,11 @@ struct telemetry_metrics_data tel_met_data; int metrics_log_level; +#define RTE_LOGTYPE_METRICS metrics_log_level /* Logging Macros */ #define METRICS_LOG(level, fmt, args...) 
\ - rte_log(RTE_LOG_ ##level, metrics_log_level, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, METRICS, "%s(): "fmt, __func__, ## args) #define METRICS_LOG_ERR(fmt, args...) \ METRICS_LOG(ERR, fmt, ## args) diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 63b2670bb0..5cf6f0566f 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -144,9 +144,10 @@ extern "C" { /* Logging Macro */ extern int rte_ml_dev_logtype; +#define RTE_LOGTYPE_MLDEV rte_ml_dev_logtype -#define RTE_MLDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_##level, rte_ml_dev_logtype, "%s(): " fmt "\n", __func__, ##args) +#define RTE_MLDEV_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, MLDEV, "%s(): " fmt, __func__, ##args) #define RTE_ML_STR_MAX 128 /**< Maximum length of name string */ diff --git a/lib/net/rte_net_crc.c b/lib/net/rte_net_crc.c index 900d6de7f4..b401ea3dd8 100644 --- a/lib/net/rte_net_crc.c +++ b/lib/net/rte_net_crc.c @@ -70,11 +70,11 @@ static const rte_net_crc_handler handlers_neon[] = { static uint16_t max_simd_bitwidth; -#define NET_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, libnet_logtype, "%s(): " fmt "\n", \ - __func__, ## args) - RTE_LOG_REGISTER_DEFAULT(libnet_logtype, INFO); +#define RTE_LOGTYPE_NET libnet_logtype + +#define NET_LOG(level, fmt, args...) \ + RTE_LOG_LINE(level, NET, "%s(): " fmt, __func__, ## args) /* Scalar handling */ diff --git a/lib/node/node_private.h b/lib/node/node_private.h index 26135aaa5b..845fdaa12e 100644 --- a/lib/node/node_private.h +++ b/lib/node/node_private.h @@ -11,11 +11,13 @@ #include <rte_mbuf_dyn.h> extern int rte_node_logtype; +#define RTE_LOGTYPE_NODE rte_node_logtype + #define NODE_LOG(level, node_name, ...) \ - rte_log(RTE_LOG_##level, rte_node_logtype, \ - RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__, ) "\n", \ + RTE_LOG_LINE(level, NODE, \ + RTE_FMT("NODE %s: %s():%u " RTE_FMT_HEAD(__VA_ARGS__ ,), \ node_name, __func__, __LINE__, \ - RTE_FMT_TAIL(__VA_ARGS__, ))) + RTE_FMT_TAIL(__VA_ARGS__ ,))) #define node_err(node_name, ...) NODE_LOG(ERR, node_name, __VA_ARGS__) #define node_info(node_name, ...) NODE_LOG(INFO, node_name, __VA_ARGS__) diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c index 70963e7ee7..f6160f9911 100644 --- a/lib/pdump/rte_pdump.c +++ b/lib/pdump/rte_pdump.c @@ -18,9 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(pdump_logtype, NOTICE); #define RTE_LOGTYPE_PDUMP pdump_logtype -#define PDUMP_LOG_LINE(level, fmt, args...) \ - RTE_LOG(level, PDUMP, "%s(): " fmt "\n", \ - __func__, ## args) +#define PDUMP_LOG_LINE(level, fmt, args...) \ + RTE_LOG_LINE(level, PDUMP, "%s(): " fmt, __func__, ## args) /* Used for the multi-process communication */ #define PDUMP_MP "mp_pdump" diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c index d52b63506e..b0aea4596d 100644 --- a/lib/pipeline/rte_pipeline.c +++ b/lib/pipeline/rte_pipeline.c @@ -12,8 +12,8 @@ #include "rte_pipeline.h" -#define PIPELINE_LOG(level, fmt, ...) \ - RTE_LOG(level, PIPELINE, fmt "\n", ## __VA_ARGS__) +#define PIPELINE_LOG(level, ...) \ + RTE_LOG_LINE(level, PIPELINE, "" __VA_ARGS__) #define RTE_TABLE_INVALID UINT32_MAX diff --git a/lib/port/port_log.h b/lib/port/port_log.h index bc1744bd97..99332a3803 100644 --- a/lib/port/port_log.h +++ b/lib/port/port_log.h @@ -4,6 +4,6 @@ #include <rte_log.h> -#define PORT_LOG(level, fmt, ...) \ - RTE_LOG(level, PORT, fmt "\n", ## __VA_ARGS__) +#define PORT_LOG(level, ...) 
\ + RTE_LOG_LINE(level, PORT, "" __VA_ARGS__) diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c index 6949f26d33..bc3f55b6bf 100644 --- a/lib/power/guest_channel.c +++ b/lib/power/guest_channel.c @@ -19,8 +19,8 @@ RTE_LOG_REGISTER_SUFFIX(guest_channel_logtype, guest_channel, INFO); #define RTE_LOGTYPE_GUEST_CHANNEL guest_channel_logtype -#define GUEST_CHANNEL_LOG(level, fmt, ...) \ - RTE_LOG(level, GUEST_CHANNEL, fmt "\n", ## __VA_ARGS__) +#define GUEST_CHANNEL_LOG(level, ...) \ + RTE_LOG_LINE(level, GUEST_CHANNEL, "" __VA_ARGS__) /* Timeout for incoming message in milliseconds. */ #define TIMEOUT 10 diff --git a/lib/power/power_common.h b/lib/power/power_common.h index 877ff2ca4c..30966400ba 100644 --- a/lib/power/power_common.h +++ b/lib/power/power_common.h @@ -12,12 +12,12 @@ extern int power_logtype; #define RTE_LOGTYPE_POWER power_logtype -#define POWER_LOG(level, fmt, ...) \ - RTE_LOG(level, POWER, fmt "\n", ## __VA_ARGS__) +#define POWER_LOG(level, ...) \ + RTE_LOG_LINE(level, POWER, "" __VA_ARGS__) #ifdef RTE_LIBRTE_POWER_DEBUG #define POWER_DEBUG_LOG(fmt, args...) \ - RTE_LOG(ERR, POWER, "%s: " fmt "\n", __func__, ## args) + RTE_LOG_LINE(ERR, POWER, "%s: " fmt, __func__, ## args) #else #define POWER_DEBUG_LOG(fmt, args...) #endif diff --git a/lib/rawdev/rte_rawdev_pmd.h b/lib/rawdev/rte_rawdev_pmd.h index 7b9ef1d09f..7173282c66 100644 --- a/lib/rawdev/rte_rawdev_pmd.h +++ b/lib/rawdev/rte_rawdev_pmd.h @@ -27,11 +27,11 @@ extern "C" { #include "rte_rawdev.h" extern int librawdev_logtype; +#define RTE_LOGTYPE_RAWDEV librawdev_logtype /* Logging Macros */ #define RTE_RDEV_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, librawdev_logtype, "%s(): " fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, RAWDEV, "%s(): " fmt, __func__, ##args) #define RTE_RDEV_ERR(fmt, args...) \ RTE_RDEV_LOG(ERR, fmt, ## args) diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c index 5b6530788a..bd0b83be0c 100644 --- a/lib/rcu/rte_rcu_qsbr.c +++ b/lib/rcu/rte_rcu_qsbr.c @@ -20,7 +20,7 @@ #include "rcu_qsbr_pvt.h" #define RCU_LOG(level, fmt, args...) \ - RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) /* Get the memory size of QSBR variable */ size_t diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h index 0dca8310c0..23c9f89805 100644 --- a/lib/rcu/rte_rcu_qsbr.h +++ b/lib/rcu/rte_rcu_qsbr.h @@ -40,17 +40,15 @@ extern int rte_rcu_log_type; #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG #define __RTE_RCU_DP_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args) + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args) #else #define __RTE_RCU_DP_LOG(level, fmt, args...) #endif #if defined(RTE_LIBRTE_RCU_DEBUG) -#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do {\ +#define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) do { \ if (v->qsbr_cnt[thread_id].lock_cnt) \ - rte_log(RTE_LOG_ ## level, rte_rcu_log_type, \ - "%s(): " fmt "\n", __func__, ## args); \ + RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args); \ } while (0) #else #define __RTE_RCU_IS_LOCK_CNT_ZERO(v, thread_id, level, fmt, args...) 
diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h index a215d8768e..a50b841b1e 100644 --- a/lib/regexdev/rte_regexdev.h +++ b/lib/regexdev/rte_regexdev.h @@ -209,8 +209,7 @@ extern int rte_regexdev_logtype; #define RTE_LOGTYPE_REGEXDEV rte_regexdev_logtype #define RTE_REGEXDEV_LOG_LINE(level, ...) \ - RTE_LOG(level, REGEXDEV, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) + RTE_LOG_LINE(level, REGEXDEV, "" __VA_ARGS__) /* Macros to check for valid port */ #define RTE_REGEXDEV_VALID_DEV_ID_OR_ERR_RET(dev_id, retval) do { \ diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c index fd550b03af..ff815445a8 100644 --- a/lib/reorder/rte_reorder.c +++ b/lib/reorder/rte_reorder.c @@ -18,8 +18,8 @@ RTE_LOG_REGISTER_DEFAULT(reorder_logtype, INFO); #define RTE_LOGTYPE_REORDER reorder_logtype -#define REORDER_LOG(level, fmt, ...) \ - RTE_LOG(level, REORDER, fmt "\n", ## __VA_ARGS__) +#define REORDER_LOG(level, ...) \ + RTE_LOG_LINE(level, REORDER, "" __VA_ARGS__) TAILQ_HEAD(rte_reorder_list, rte_tailq_entry); diff --git a/lib/rib/rib_log.h b/lib/rib/rib_log.h index ce74a4ce3e..db549d28a7 100644 --- a/lib/rib/rib_log.h +++ b/lib/rib/rib_log.h @@ -4,5 +4,5 @@ extern int rib_logtype; #define RTE_LOGTYPE_RIB rib_logtype -#define RIB_LOG(level, fmt, ...) \ - RTE_LOG(level, RIB, fmt "\n", ## __VA_ARGS__) +#define RIB_LOG(level, ...) \ + RTE_LOG_LINE(level, RIB, "" __VA_ARGS__) diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c index 118ffab4b9..c59f626263 100644 --- a/lib/ring/rte_ring.c +++ b/lib/ring/rte_ring.c @@ -28,8 +28,8 @@ RTE_LOG_REGISTER_DEFAULT(ring_logtype, INFO); #define RTE_LOGTYPE_RING ring_logtype -#define RING_LOG(level, fmt, ...) \ - RTE_LOG(level, RING, fmt "\n", ## __VA_ARGS__) +#define RING_LOG(level, ...) \ + RTE_LOG_LINE(level, RING, "" __VA_ARGS__) TAILQ_HEAD(rte_ring_list, rte_tailq_entry); diff --git a/lib/sched/rte_sched_log.h b/lib/sched/rte_sched_log.h index d050b8fda1..19ddb655f8 100644 --- a/lib/sched/rte_sched_log.h +++ b/lib/sched/rte_sched_log.h @@ -2,5 +2,5 @@ extern int sched_logtype; #define RTE_LOGTYPE_SCHED sched_logtype -#define SCHED_LOG(level, fmt, ...) \ - RTE_LOG(level, SCHED, fmt "\n", ## __VA_ARGS__) +#define SCHED_LOG(level, ...) \ + RTE_LOG_LINE(level, SCHED, "" __VA_ARGS__) diff --git a/lib/stack/stack_pvt.h b/lib/stack/stack_pvt.h index c7eab4027d..2dce42a9da 100644 --- a/lib/stack/stack_pvt.h +++ b/lib/stack/stack_pvt.h @@ -8,10 +8,10 @@ #include <rte_log.h> extern int stack_logtype; +#define RTE_LOGTYPE_STACK stack_logtype #define STACK_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \ - __func__, ##args) + RTE_LOG_LINE(level, STACK, "%s(): "fmt, __func__, ##args) #define STACK_LOG_ERR(fmt, args...) \ STACK_LOG(ERR, fmt, ## args) diff --git a/lib/table/table_log.h b/lib/table/table_log.h index b50b20e595..0330f89d41 100644 --- a/lib/table/table_log.h +++ b/lib/table/table_log.h @@ -4,6 +4,6 @@ #include <rte_log.h> -#define TABLE_LOG(level, fmt, ...) \ - RTE_LOG(level, TABLE, fmt "\n", ## __VA_ARGS__) +#define TABLE_LOG(level, ...) \ + RTE_LOG_LINE(level, TABLE, "" __VA_ARGS__) diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c index 747eba2656..31e2391867 100644 --- a/lib/telemetry/telemetry.c +++ b/lib/telemetry/telemetry.c @@ -57,9 +57,7 @@ static rte_cpuset_t *thread_cpuset; RTE_LOG_REGISTER_DEFAULT(logtype, WARNING); #define RTE_LOGTYPE_TMTY logtype -#define TMTY_LOG_LINE(l, ...) 
\ - RTE_LOG(l, TMTY, RTE_FMT("TELEMETRY: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ - RTE_FMT_TAIL(__VA_ARGS__ ,))) +#define TMTY_LOG_LINE(l, ...) RTE_LOG_LINE(l, TMTY, "TELEMETRY: " __VA_ARGS__) /* list of command callbacks, with one command registered by default */ static struct cmd_callback *callbacks; diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c index 01cb77257e..79a8d2c006 100644 --- a/lib/vhost/fd_man.c +++ b/lib/vhost/fd_man.c @@ -12,8 +12,8 @@ RTE_LOG_REGISTER_SUFFIX(vhost_fdset_logtype, fdset, INFO); #define RTE_LOGTYPE_VHOST_FDMAN vhost_fdset_logtype -#define VHOST_FDMAN_LOG(level, fmt, ...) \ - RTE_LOG(level, VHOST_FDMAN, fmt "\n", ## __VA_ARGS__) +#define VHOST_FDMAN_LOG(level, ...) \ + RTE_LOG_LINE(level, VHOST_FDMAN, "" __VA_ARGS__) #define FDPOLLERR (POLLERR | POLLHUP | POLLNVAL) diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h index e0f6dd4081..470dadbba6 100644 --- a/lib/vhost/vhost.h +++ b/lib/vhost/vhost.h @@ -678,10 +678,10 @@ extern int vhost_data_log_level; #define RTE_LOGTYPE_VHOST_DATA vhost_data_log_level #define VHOST_CONFIG_LOG(prefix, level, fmt, args...) \ - RTE_LOG(level, VHOST_CONFIG, "VHOST_CONFIG: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_LINE(level, VHOST_CONFIG, "VHOST_CONFIG: (%s) " fmt, prefix, ##args) #define VHOST_DATA_LOG(prefix, level, fmt, args...) \ - RTE_LOG_DP(level, VHOST_DATA, "VHOST_DATA: (%s) " fmt "\n", prefix, ##args) + RTE_LOG_DP_LINE(level, VHOST_DATA, "VHOST_DATA: (%s) " fmt, prefix, ##args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VHOST_MAX_PRINT_BUFF 6072 diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c index 6e5443e5f8..3704fbbb3d 100644 --- a/lib/vhost/vhost_crypto.c +++ b/lib/vhost/vhost_crypto.c @@ -21,15 +21,15 @@ RTE_LOG_REGISTER_SUFFIX(vhost_crypto_logtype, crypto, INFO); #define RTE_LOGTYPE_VHOST_CRYPTO vhost_crypto_logtype #define VC_LOG_ERR(fmt, args...) \ - RTE_LOG(ERR, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(ERR, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #define VC_LOG_INFO(fmt, args...) \ - RTE_LOG(INFO, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(INFO, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #ifdef RTE_LIBRTE_VHOST_DEBUG #define VC_LOG_DBG(fmt, args...) \ - RTE_LOG(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt "\n", \ + RTE_LOG_LINE(DEBUG, VHOST_CRYPTO, "%s() line %u: " fmt, \ __func__, __LINE__, ## args) #else #define VC_LOG_DBG(fmt, args...) -- 2.43.0 ^ permalink raw reply [flat|nested] 122+ messages in thread
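A recurring idiom in the conversion above is the "" __VA_ARGS__ form used by the simpler helpers (EAL_LOG, EFD_LOG, HASH_LOG, ...). Because adjacent string literals are concatenated by the compiler, prefixing an empty literal forces the first variadic argument to be a string literal: a runtime variable can no longer be passed as the format string. A minimal illustration, assuming the RTE_LOG_LINE helper added by this series; the logtype and helper names below are hypothetical and not part of the patch:

#include <rte_log.h>

/* Hypothetical logtype, for illustration only. */
extern int example_logtype;
#define RTE_LOGTYPE_EXAMPLE example_logtype

#define EXAMPLE_LOG(level, ...) \
	RTE_LOG_LINE(level, EXAMPLE, "" __VA_ARGS__)

static void configure_notice(unsigned int nb_queues)
{
	/* Builds: the format string is a literal, concatenated with "". */
	EXAMPLE_LOG(INFO, "configured %u queues", nb_queues);

	/*
	 * Would not build: "" cannot be concatenated with a non-literal,
	 * so a runtime format string is rejected at compile time.
	 *
	 * const char *fmt = "configured %u queues";
	 * EXAMPLE_LOG(INFO, fmt, nb_queues);
	 */
}
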
* Re: [PATCH v5 00/13] Detect superfluous newline in logs 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (12 preceding siblings ...) 2023-12-20 15:36 ` [PATCH v5 13/13] lib: use per line logging in helpers David Marchand @ 2023-12-21 9:31 ` David Marchand 2023-12-21 16:32 ` Stephen Hemminger 14 siblings, 0 replies; 122+ messages in thread From: David Marchand @ 2023-12-21 9:31 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, stephen, mb On Wed, Dec 20, 2023 at 4:37 PM David Marchand <david.marchand@redhat.com> wrote: > > Getting readable and consistent logs is important when running a DPDK > application, especially when troubleshooting. > A common issue with logs is when a DPDK change do not add (or on the > contrary add too many \n) in the format string. > > This issue would only get noticed when actually hitting this log (which > may be a situation hard to reach). > > This series proposes to introduce a new RTE_LOG_LINE helper that is > responsible for logging a one line message and spews a build error (with > gcc) if any \n is part of the format string. > > > Since the v1 discussion on the cover letter, I changed my mind, and made the > choice to break existing logging helpers exported in the public API. > The reasoning is that those should not be used in the first place: > logs should be produced only by the library that registers the logtype. > > Some multiline logging for debugging and the test assert macros are > still present, but in this case an explicit call to RTE_LOG() is done. > This can be checked with a simple: > $ git grep -E 'RTE_LOG(_DP)?\(' -- lib/ :^lib/log/ > lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n" > lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n" > lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" > lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, > lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n" > lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \ > lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...) RTE_LOG(lvl, IPFRAG, fmt, ##args) > lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" > lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" > lib/vhost/vhost.h: RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ Series applied, thanks for the reviews. -- David Marchand ^ permalink raw reply [flat|nested] 122+ messages in thread
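The build-time detection of a stray \n described in the cover letter relies on the compiler being able to inspect the literal format string. With GCC, __builtin_strchr() on a string literal folds to a constant, so it can feed a static_assert. The sketch below only illustrates the idea with made-up macro names; the helper merged in lib/log may differ in detail (in particular, such a check can only be enabled on non-pedantic GCC builds):

#include <assert.h>
#include <rte_common.h>
#include <rte_log.h>

/*
 * Reject, at build time, any format string containing '\n'.
 * Relies on GCC folding __builtin_strchr() on string literals;
 * other compilers simply skip the check.
 */
#ifdef __GNUC__
#define LOG_CHECK_NO_NEWLINE(fmt) \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"log format string must not contain a newline")
#else
#define LOG_CHECK_NO_NEWLINE(fmt)
#endif

/* Log a single line: the trailing newline is appended by the macro. */
#define LOG_LINE(level, logtype, ...) do { \
	LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
	RTE_LOG(level, logtype, \
		RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
			RTE_FMT_TAIL(__VA_ARGS__ ,))); \
} while (0)
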
* Re: [PATCH v5 00/13] Detect superfluous newline in logs 2023-12-20 15:35 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand ` (13 preceding siblings ...) 2023-12-21 9:31 ` [PATCH v5 00/13] Detect superfluous newline in logs David Marchand @ 2023-12-21 16:32 ` Stephen Hemminger 14 siblings, 0 replies; 122+ messages in thread From: Stephen Hemminger @ 2023-12-21 16:32 UTC (permalink / raw) To: David Marchand; +Cc: dev, thomas, ferruh.yigit, bruce.richardson, mb On Wed, 20 Dec 2023 16:35:53 +0100 David Marchand <david.marchand@redhat.com> wrote: > Getting readable and consistent logs is important when running a DPDK > application, especially when troubleshooting. > A common issue with logs is when a DPDK change do not add (or on the > contrary add too many \n) in the format string. > > This issue would only get noticed when actually hitting this log (which > may be a situation hard to reach). > > This series proposes to introduce a new RTE_LOG_LINE helper that is > responsible for logging a one line message and spews a build error (with > gcc) if any \n is part of the format string. > > > Since the v1 discussion on the cover letter, I changed my mind, and made the > choice to break existing logging helpers exported in the public API. > The reasoning is that those should not be used in the first place: > logs should be produced only by the library that registers the logtype. > > Some multiline logging for debugging and the test assert macros are > still present, but in this case an explicit call to RTE_LOG() is done. > This can be checked with a simple: > $ git grep -E 'RTE_LOG(_DP)?\(' -- lib/ :^lib/log/ > lib/acl/acl_bld.c: RTE_LOG(DEBUG, ACL, "Build phase for ACL \"%s\":\n" > lib/acl/acl_gen.c: RTE_LOG(DEBUG, ACL, "Gen phase for ACL \"%s\":\n" > lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, "%s(%p) stats:\n" > lib/bpf/bpf_validate.c: RTE_LOG(DEBUG, BPF, > lib/eal/common/eal_common_debug.c: RTE_LOG(CRIT, EAL, "Error - exiting with code: %d\n" > lib/eal/include/rte_test.h: RTE_LOG(ERR, EAL, "Test assert %s line %d failed: " \ > lib/ip_frag/ip_frag_common.h:#define IP_FRAG_LOG(lvl, fmt, args...) RTE_LOG(lvl, IPFRAG, fmt, ##args) > lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n" > lib/sched/rte_sched.c: RTE_LOG(DEBUG, SCHED, "Low level config for subport profile %u:\n" > lib/vhost/vhost.h: RTE_LOG_DP(DEBUG, VHOST_DATA, "VHOST_DATA: (%s) %s", dev->ifname, packet); \ Tabs are also problematic for syslog. With simple mod to testpmd: diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index 9e4e99e53b9a..811fe0c4aeb8 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ -4555,6 +4555,9 @@ main(int argc, char** argv) rte_exit(EXIT_FAILURE, "Cannot init EAL: %s\n", rte_strerror(rte_errno)); + RTE_LOG(INFO, USER1, "Sample log message with extra\nnewline\n"); + RTE_LOG(INFO, USER1, "Sample log message with tab\tinserted\n"); And the result in syslog is: 2023-12-21T08:27:47.172156-08:00 sut dpdk-testpmd[69471]: USER1: Sample log message with extra#012newline 2023-12-21T08:27:47.173418-08:00 sut dpdk-testpmd[69471]: USER1: Sample log message with tab#011inserted ^ permalink raw reply [flat|nested] 122+ messages in thread