From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@amd.com,
bruce.richardson@intel.com, stephen@networkplumber.org,
mb@smartsharesystems.com,
Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
Anatoly Burakov <anatoly.burakov@intel.com>,
Harman Kalra <hkalra@marvell.com>,
Jerin Jacob <jerinj@marvell.com>,
Sunil Kumar Kori <skori@marvell.com>,
Harry van Haaren <harry.van.haaren@intel.com>,
Stanislaw Kardach <kda@semihalf.com>,
Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>,
Dmitry Malloy <dmitrym@microsoft.com>,
Pallavi Kadam <pallavi.kadam@intel.com>,
Byron Marohn <byron.marohn@intel.com>,
Yipeng Wang <yipeng1.wang@intel.com>,
Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
Sameh Gobriel <sameh.gobriel@intel.com>,
Reshma Pattan <reshma.pattan@intel.com>,
Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
Cristian Dumitrescu <cristian.dumitrescu@intel.com>,
David Hunt <david.hunt@intel.com>,
Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
Volodymyr Fialko <vfialko@marvell.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Chenbo Xia <chenbox@nvidia.com>
Subject: [RFC v2 12/14] lib: convert to per line logging
Date: Fri, 8 Dec 2023 15:59:46 +0100
Message-ID: <20231208145950.2184940-13-david.marchand@redhat.com>
In-Reply-To: <20231208145950.2184940-1-david.marchand@redhat.com>
Convert many libraries that call RTE_LOG(... "\n", ...) to RTE_LOG_LINE.
Note:
- the acl and sched libraries still have some multi-line debug
messages; for those, a direct call to RTE_LOG is kept: this makes it
easier to notice such special cases,
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
lib/acl/acl_bld.c | 28 +--
lib/acl/acl_gen.c | 8 +-
lib/acl/rte_acl.c | 8 +-
lib/acl/tb_mem.c | 4 +-
lib/eal/common/eal_common_bus.c | 22 +-
lib/eal/common/eal_common_class.c | 4 +-
lib/eal/common/eal_common_config.c | 2 +-
lib/eal/common/eal_common_debug.c | 6 +-
lib/eal/common/eal_common_dev.c | 80 +++----
lib/eal/common/eal_common_devargs.c | 18 +-
lib/eal/common/eal_common_dynmem.c | 34 +--
lib/eal/common/eal_common_fbarray.c | 12 +-
lib/eal/common/eal_common_interrupts.c | 38 ++--
lib/eal/common/eal_common_lcore.c | 26 +--
lib/eal/common/eal_common_memalloc.c | 12 +-
lib/eal/common/eal_common_memory.c | 66 +++---
lib/eal/common/eal_common_memzone.c | 24 +--
lib/eal/common/eal_common_options.c | 236 ++++++++++----------
lib/eal/common/eal_common_proc.c | 112 +++++-----
lib/eal/common/eal_common_tailqs.c | 12 +-
lib/eal/common/eal_common_thread.c | 12 +-
lib/eal/common/eal_common_timer.c | 6 +-
lib/eal/common/eal_common_trace_utils.c | 2 +-
lib/eal/common/eal_trace.h | 4 +-
lib/eal/common/hotplug_mp.c | 54 ++---
lib/eal/common/malloc_elem.c | 6 +-
lib/eal/common/malloc_heap.c | 40 ++--
lib/eal/common/malloc_mp.c | 72 +++----
lib/eal/common/rte_keepalive.c | 2 +-
lib/eal/common/rte_malloc.c | 10 +-
lib/eal/common/rte_service.c | 8 +-
lib/eal/freebsd/eal.c | 74 +++----
lib/eal/freebsd/eal_alarm.c | 2 +-
lib/eal/freebsd/eal_dev.c | 8 +-
lib/eal/freebsd/eal_hugepage_info.c | 22 +-
lib/eal/freebsd/eal_interrupts.c | 60 +++---
lib/eal/freebsd/eal_lcore.c | 2 +-
lib/eal/freebsd/eal_memalloc.c | 10 +-
lib/eal/freebsd/eal_memory.c | 34 +--
lib/eal/freebsd/eal_thread.c | 2 +-
lib/eal/freebsd/eal_timer.c | 10 +-
lib/eal/linux/eal.c | 122 +++++------
lib/eal/linux/eal_alarm.c | 2 +-
lib/eal/linux/eal_dev.c | 40 ++--
lib/eal/linux/eal_hugepage_info.c | 38 ++--
lib/eal/linux/eal_interrupts.c | 116 +++++-----
lib/eal/linux/eal_lcore.c | 4 +-
lib/eal/linux/eal_memalloc.c | 120 +++++------
lib/eal/linux/eal_memory.c | 208 +++++++++---------
lib/eal/linux/eal_thread.c | 4 +-
lib/eal/linux/eal_timer.c | 10 +-
lib/eal/linux/eal_vfio.c | 270 +++++++++++------------
lib/eal/linux/eal_vfio_mp_sync.c | 4 +-
lib/eal/riscv/rte_cycles.c | 4 +-
lib/eal/unix/eal_filesystem.c | 14 +-
lib/eal/unix/eal_firmware.c | 2 +-
lib/eal/unix/eal_unix_memory.c | 8 +-
lib/eal/unix/rte_thread.c | 34 +--
lib/eal/windows/eal.c | 36 ++--
lib/eal/windows/eal_alarm.c | 12 +-
lib/eal/windows/eal_debug.c | 8 +-
lib/eal/windows/eal_dev.c | 8 +-
lib/eal/windows/eal_hugepages.c | 10 +-
lib/eal/windows/eal_interrupts.c | 10 +-
lib/eal/windows/eal_lcore.c | 6 +-
lib/eal/windows/eal_memalloc.c | 50 ++---
lib/eal/windows/eal_memory.c | 20 +-
lib/eal/windows/eal_windows.h | 4 +-
lib/eal/windows/include/rte_windows.h | 4 +-
lib/eal/windows/rte_thread.c | 28 +--
lib/efd/rte_efd.c | 58 ++---
lib/fib/rte_fib.c | 14 +-
lib/fib/rte_fib6.c | 14 +-
lib/hash/rte_cuckoo_hash.c | 52 ++---
lib/hash/rte_fbk_hash.c | 4 +-
lib/hash/rte_hash_crc.c | 12 +-
lib/hash/rte_thash.c | 20 +-
lib/hash/rte_thash_gfni.c | 8 +-
lib/ip_frag/rte_ip_frag_common.c | 8 +-
lib/latencystats/rte_latencystats.c | 41 ++--
lib/log/log.c | 6 +-
lib/lpm/rte_lpm.c | 12 +-
lib/lpm/rte_lpm6.c | 10 +-
lib/mbuf/rte_mbuf.c | 14 +-
lib/mbuf/rte_mbuf_dyn.c | 14 +-
lib/mbuf/rte_mbuf_pool_ops.c | 4 +-
lib/mempool/rte_mempool.c | 24 +--
lib/mempool/rte_mempool.h | 2 +-
lib/mempool/rte_mempool_ops.c | 10 +-
lib/pipeline/rte_pipeline.c | 228 ++++++++++----------
lib/port/rte_port_ethdev.c | 18 +-
lib/port/rte_port_eventdev.c | 18 +-
lib/port/rte_port_fd.c | 24 +--
lib/port/rte_port_frag.c | 14 +-
lib/port/rte_port_ras.c | 12 +-
lib/port/rte_port_ring.c | 18 +-
lib/port/rte_port_sched.c | 12 +-
lib/port/rte_port_source_sink.c | 48 ++---
lib/port/rte_port_sym_crypto.c | 18 +-
lib/power/guest_channel.c | 38 ++--
lib/power/power_acpi_cpufreq.c | 106 ++++-----
lib/power/power_amd_pstate_cpufreq.c | 120 +++++------
lib/power/power_common.c | 10 +-
lib/power/power_cppc_cpufreq.c | 118 +++++-----
lib/power/power_intel_uncore.c | 68 +++---
lib/power/power_kvm_vm.c | 22 +-
lib/power/power_pstate_cpufreq.c | 144 ++++++-------
lib/power/rte_power.c | 22 +-
lib/power/rte_power_pmd_mgmt.c | 34 +--
lib/power/rte_power_uncore.c | 14 +-
lib/rcu/rte_rcu_qsbr.c | 2 +-
lib/reorder/rte_reorder.c | 32 +--
lib/rib/rte_rib.c | 10 +-
lib/rib/rte_rib6.c | 10 +-
lib/ring/rte_ring.c | 24 +--
lib/sched/rte_pie.c | 18 +-
lib/sched/rte_sched.c | 274 ++++++++++++------------
lib/table/rte_table_acl.c | 72 +++----
lib/table/rte_table_array.c | 16 +-
lib/table/rte_table_hash_cuckoo.c | 22 +-
lib/table/rte_table_hash_ext.c | 22 +-
lib/table/rte_table_hash_key16.c | 38 ++--
lib/table/rte_table_hash_key32.c | 38 ++--
lib/table/rte_table_hash_key8.c | 38 ++--
lib/table/rte_table_hash_lru.c | 22 +-
lib/table/rte_table_lpm.c | 42 ++--
lib/table/rte_table_lpm_ipv6.c | 44 ++--
lib/table/rte_table_stub.c | 4 +-
lib/vhost/fd_man.c | 8 +-
129 files changed, 2278 insertions(+), 2279 deletions(-)
diff --git a/lib/acl/acl_bld.c b/lib/acl/acl_bld.c
index eaf8770415..27bdd6b9a1 100644
--- a/lib/acl/acl_bld.c
+++ b/lib/acl/acl_bld.c
@@ -1017,8 +1017,8 @@ build_trie(struct acl_build_context *context, struct rte_acl_build_rule *head,
break;
default:
- RTE_LOG(ERR, ACL,
- "Error in rule[%u] type - %hhu\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "Error in rule[%u] type - %hhu",
rule->f->data.userdata,
rule->config->defs[n].type);
return NULL;
@@ -1374,7 +1374,7 @@ acl_build_tries(struct acl_build_context *context,
last = build_one_trie(context, rule_sets, n, context->node_max);
if (context->bld_tries[n].trie == NULL) {
- RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n);
+ RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n);
return -ENOMEM;
}
@@ -1383,8 +1383,8 @@ acl_build_tries(struct acl_build_context *context,
break;
if (num_tries == RTE_DIM(context->tries)) {
- RTE_LOG(ERR, ACL,
- "Exceeded max number of tries: %u\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "Exceeded max number of tries: %u",
num_tries);
return -ENOMEM;
}
@@ -1409,7 +1409,7 @@ acl_build_tries(struct acl_build_context *context,
*/
last = build_one_trie(context, rule_sets, n, INT32_MAX);
if (context->bld_tries[n].trie == NULL || last != NULL) {
- RTE_LOG(ERR, ACL, "Build of %u-th trie failed\n", n);
+ RTE_LOG_LINE(ERR, ACL, "Build of %u-th trie failed", n);
return -ENOMEM;
}
@@ -1435,8 +1435,8 @@ acl_build_log(const struct acl_build_context *ctx)
for (n = 0; n < RTE_DIM(ctx->tries); n++) {
if (ctx->tries[n].count != 0)
- RTE_LOG(DEBUG, ACL,
- "trie %u: number of rules: %u, indexes: %u\n",
+ RTE_LOG_LINE(DEBUG, ACL,
+ "trie %u: number of rules: %u, indexes: %u",
n, ctx->tries[n].count,
ctx->tries[n].num_data_indexes);
}
@@ -1526,8 +1526,8 @@ acl_bld(struct acl_build_context *bcx, struct rte_acl_ctx *ctx,
/* build phase runs out of memory. */
if (rc != 0) {
- RTE_LOG(ERR, ACL,
- "ACL context: %s, %s() failed with error code: %d\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "ACL context: %s, %s() failed with error code: %d",
bcx->acx->name, __func__, rc);
return rc;
}
@@ -1568,8 +1568,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg)
for (i = 0; i != cfg->num_fields; i++) {
if (cfg->defs[i].type > RTE_ACL_FIELD_TYPE_BITMASK) {
- RTE_LOG(ERR, ACL,
- "ACL context: %s, invalid type: %hhu for %u-th field\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "ACL context: %s, invalid type: %hhu for %u-th field",
ctx->name, cfg->defs[i].type, i);
return -EINVAL;
}
@@ -1580,8 +1580,8 @@ acl_check_bld_param(struct rte_acl_ctx *ctx, const struct rte_acl_config *cfg)
;
if (j == RTE_DIM(field_sizes)) {
- RTE_LOG(ERR, ACL,
- "ACL context: %s, invalid size: %hhu for %u-th field\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "ACL context: %s, invalid size: %hhu for %u-th field",
ctx->name, cfg->defs[i].size, i);
return -EINVAL;
}
diff --git a/lib/acl/acl_gen.c b/lib/acl/acl_gen.c
index 03a47ea231..2f612df1e0 100644
--- a/lib/acl/acl_gen.c
+++ b/lib/acl/acl_gen.c
@@ -471,9 +471,9 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
XMM_SIZE;
if (total_size > max_size) {
- RTE_LOG(DEBUG, ACL,
+ RTE_LOG_LINE(DEBUG, ACL,
"Gen phase for ACL ctx \"%s\" exceeds max_size limit, "
- "bytes required: %zu, allowed: %zu\n",
+ "bytes required: %zu, allowed: %zu",
ctx->name, total_size, max_size);
return -ERANGE;
}
@@ -481,8 +481,8 @@ rte_acl_gen(struct rte_acl_ctx *ctx, struct rte_acl_trie *trie,
mem = rte_zmalloc_socket(ctx->name, total_size, RTE_CACHE_LINE_SIZE,
ctx->socket_id);
if (mem == NULL) {
- RTE_LOG(ERR, ACL,
- "allocation of %zu bytes on socket %d for %s failed\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "allocation of %zu bytes on socket %d for %s failed",
total_size, ctx->socket_id, ctx->name);
return -ENOMEM;
}
diff --git a/lib/acl/rte_acl.c b/lib/acl/rte_acl.c
index 760c3587d4..bec26d0a22 100644
--- a/lib/acl/rte_acl.c
+++ b/lib/acl/rte_acl.c
@@ -399,15 +399,15 @@ rte_acl_create(const struct rte_acl_param *param)
te = rte_zmalloc("ACL_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, ACL, "Cannot allocate tailq entry!\n");
+ RTE_LOG_LINE(ERR, ACL, "Cannot allocate tailq entry!");
goto exit;
}
ctx = rte_zmalloc_socket(name, sz, RTE_CACHE_LINE_SIZE, param->socket_id);
if (ctx == NULL) {
- RTE_LOG(ERR, ACL,
- "allocation of %zu bytes on socket %d for %s failed\n",
+ RTE_LOG_LINE(ERR, ACL,
+ "allocation of %zu bytes on socket %d for %s failed",
sz, param->socket_id, name);
rte_free(te);
goto exit;
@@ -473,7 +473,7 @@ rte_acl_add_rules(struct rte_acl_ctx *ctx, const struct rte_acl_rule *rules,
((uintptr_t)rules + i * ctx->rule_sz);
rc = acl_check_rule(&rv->data);
if (rc != 0) {
- RTE_LOG(ERR, ACL, "%s(%s): rule #%u is invalid\n",
+ RTE_LOG_LINE(ERR, ACL, "%s(%s): rule #%u is invalid",
__func__, ctx->name, i + 1);
return rc;
}
diff --git a/lib/acl/tb_mem.c b/lib/acl/tb_mem.c
index 238d65692a..228e62c8cd 100644
--- a/lib/acl/tb_mem.c
+++ b/lib/acl/tb_mem.c
@@ -26,8 +26,8 @@ tb_pool(struct tb_mem_pool *pool, size_t sz)
size = sz + pool->alignment - 1;
block = calloc(1, size + sizeof(*pool->block));
if (block == NULL) {
- RTE_LOG(ERR, ACL, "%s(%zu) failed, currently allocated "
- "by pool: %zu bytes\n", __func__, sz, pool->alloc);
+ RTE_LOG_LINE(ERR, ACL, "%s(%zu) failed, currently allocated "
+ "by pool: %zu bytes", __func__, sz, pool->alloc);
siglongjmp(pool->fail, -ENOMEM);
return NULL;
}
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index acac14131a..456f27112c 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -35,14 +35,14 @@ rte_bus_register(struct rte_bus *bus)
RTE_VERIFY(!bus->plug || bus->unplug);
TAILQ_INSERT_TAIL(&rte_bus_list, bus, next);
- RTE_LOG(DEBUG, EAL, "Registered [%s] bus.\n", rte_bus_name(bus));
+ RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] bus.", rte_bus_name(bus));
}
void
rte_bus_unregister(struct rte_bus *bus)
{
TAILQ_REMOVE(&rte_bus_list, bus, next);
- RTE_LOG(DEBUG, EAL, "Unregistered [%s] bus.\n", rte_bus_name(bus));
+ RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] bus.", rte_bus_name(bus));
}
/* Scan all the buses for registered devices */
@@ -55,7 +55,7 @@ rte_bus_scan(void)
TAILQ_FOREACH(bus, &rte_bus_list, next) {
ret = bus->scan();
if (ret)
- RTE_LOG(ERR, EAL, "Scan for (%s) bus failed.\n",
+ RTE_LOG_LINE(ERR, EAL, "Scan for (%s) bus failed.",
rte_bus_name(bus));
}
@@ -77,14 +77,14 @@ rte_bus_probe(void)
ret = bus->probe();
if (ret)
- RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n",
+ RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.",
rte_bus_name(bus));
}
if (vbus) {
ret = vbus->probe();
if (ret)
- RTE_LOG(ERR, EAL, "Bus (%s) probe failed.\n",
+ RTE_LOG_LINE(ERR, EAL, "Bus (%s) probe failed.",
rte_bus_name(vbus));
}
@@ -133,7 +133,7 @@ rte_bus_dump(FILE *f)
TAILQ_FOREACH(bus, &rte_bus_list, next) {
ret = bus_dump_one(f, bus);
if (ret) {
- RTE_LOG(ERR, EAL, "Unable to write to stream (%d)\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to write to stream (%d)",
ret);
break;
}
@@ -235,15 +235,15 @@ rte_bus_get_iommu_class(void)
continue;
bus_iova_mode = bus->get_iommu_class();
- RTE_LOG(DEBUG, EAL, "Bus %s wants IOVA as '%s'\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Bus %s wants IOVA as '%s'",
rte_bus_name(bus),
bus_iova_mode == RTE_IOVA_DC ? "DC" :
(bus_iova_mode == RTE_IOVA_PA ? "PA" : "VA"));
if (bus_iova_mode == RTE_IOVA_PA) {
buses_want_pa = true;
if (!RTE_IOVA_IN_MBUF)
- RTE_LOG(WARNING, EAL,
- "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.\n",
+ RTE_LOG_LINE(WARNING, EAL,
+ "Bus %s wants IOVA as PA not compatible with 'enable_iova_as_pa=false' build option.",
rte_bus_name(bus));
} else if (bus_iova_mode == RTE_IOVA_VA)
buses_want_va = true;
@@ -255,8 +255,8 @@ rte_bus_get_iommu_class(void)
} else {
mode = RTE_IOVA_DC;
if (buses_want_va) {
- RTE_LOG(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.\n");
- RTE_LOG(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.\n");
+ RTE_LOG_LINE(WARNING, EAL, "Some buses want 'VA' but forcing 'DC' because other buses want 'PA'.");
+ RTE_LOG_LINE(WARNING, EAL, "Depending on the final decision by the EAL, not all buses may be able to initialize.");
}
}
diff --git a/lib/eal/common/eal_common_class.c b/lib/eal/common/eal_common_class.c
index 0187076af1..02a983b286 100644
--- a/lib/eal/common/eal_common_class.c
+++ b/lib/eal/common/eal_common_class.c
@@ -19,14 +19,14 @@ rte_class_register(struct rte_class *class)
RTE_VERIFY(class->name && strlen(class->name));
TAILQ_INSERT_TAIL(&rte_class_list, class, next);
- RTE_LOG(DEBUG, EAL, "Registered [%s] device class.\n", class->name);
+ RTE_LOG_LINE(DEBUG, EAL, "Registered [%s] device class.", class->name);
}
void
rte_class_unregister(struct rte_class *class)
{
TAILQ_REMOVE(&rte_class_list, class, next);
- RTE_LOG(DEBUG, EAL, "Unregistered [%s] device class.\n", class->name);
+ RTE_LOG_LINE(DEBUG, EAL, "Unregistered [%s] device class.", class->name);
}
struct rte_class *
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 0daf0f3188..4b6530f2fb 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -31,7 +31,7 @@ int
eal_set_runtime_dir(const char *run_dir)
{
if (strlcpy(runtime_dir, run_dir, PATH_MAX) >= PATH_MAX) {
- RTE_LOG(ERR, EAL, "Runtime directory string too long\n");
+ RTE_LOG_LINE(ERR, EAL, "Runtime directory string too long");
return -1;
}
diff --git a/lib/eal/common/eal_common_debug.c b/lib/eal/common/eal_common_debug.c
index 9cac9c6390..065843f34e 100644
--- a/lib/eal/common/eal_common_debug.c
+++ b/lib/eal/common/eal_common_debug.c
@@ -16,7 +16,7 @@ __rte_panic(const char *funcname, const char *format, ...)
{
va_list ap;
- rte_log(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, "PANIC in %s():\n", funcname);
+ RTE_LOG_LINE(CRIT, EAL, "PANIC in %s():", funcname);
va_start(ap, format);
rte_vlog(RTE_LOG_CRIT, RTE_LOGTYPE_EAL, format, ap);
va_end(ap);
@@ -42,7 +42,7 @@ rte_exit(int exit_code, const char *format, ...)
va_end(ap);
if (rte_eal_cleanup() != 0 && rte_errno != EALREADY)
- RTE_LOG(CRIT, EAL,
- "EAL could not release all resources\n");
+ RTE_LOG_LINE(CRIT, EAL,
+ "EAL could not release all resources");
exit(exit_code);
}
diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c
index 614ef6c9fc..359907798a 100644
--- a/lib/eal/common/eal_common_dev.c
+++ b/lib/eal/common/eal_common_dev.c
@@ -182,7 +182,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev)
goto err_devarg;
if (da->bus->plug == NULL) {
- RTE_LOG(ERR, EAL, "Function plug not supported by bus (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Function plug not supported by bus (%s)",
da->bus->name);
ret = -ENOTSUP;
goto err_devarg;
@@ -199,7 +199,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev)
dev = da->bus->find_device(NULL, cmp_dev_name, da->name);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find device (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s)",
da->name);
ret = -ENODEV;
goto err_devarg;
@@ -214,7 +214,7 @@ local_dev_probe(const char *devargs, struct rte_device **new_dev)
ret = -ENOTSUP;
if (ret && !rte_dev_is_probed(dev)) { /* if hasn't ever succeeded */
- RTE_LOG(ERR, EAL, "Driver cannot attach the device (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Driver cannot attach the device (%s)",
dev->name);
return ret;
}
@@ -248,13 +248,13 @@ rte_dev_probe(const char *devargs)
*/
ret = eal_dev_hotplug_request_to_primary(&req);
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to send hotplug request to primary\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to send hotplug request to primary");
return -ENOMSG;
}
if (req.result != 0)
- RTE_LOG(ERR, EAL,
- "Failed to hotplug add device\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to hotplug add device");
return req.result;
}
@@ -264,8 +264,8 @@ rte_dev_probe(const char *devargs)
ret = local_dev_probe(devargs, &dev);
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to attach device on primary process\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to attach device on primary process");
/**
* it is possible that secondary process failed to attached a
@@ -282,8 +282,8 @@ rte_dev_probe(const char *devargs)
/* if any communication error, we need to rollback. */
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to send hotplug add request to secondary\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to send hotplug add request to secondary");
ret = -ENOMSG;
goto rollback;
}
@@ -293,8 +293,8 @@ rte_dev_probe(const char *devargs)
* is necessary.
*/
if (req.result != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to attach device on secondary process\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to attach device on secondary process");
ret = req.result;
/* for -EEXIST, we don't need to rollback. */
@@ -310,15 +310,15 @@ rte_dev_probe(const char *devargs)
/* primary send rollback request to secondary. */
if (eal_dev_hotplug_request_to_secondary(&req) != 0)
- RTE_LOG(WARNING, EAL,
+ RTE_LOG_LINE(WARNING, EAL,
"Failed to rollback device attach on secondary."
- "Devices in secondary may not sync with primary\n");
+ "Devices in secondary may not sync with primary");
/* primary rollback itself. */
if (local_dev_remove(dev) != 0)
- RTE_LOG(WARNING, EAL,
+ RTE_LOG_LINE(WARNING, EAL,
"Failed to rollback device attach on primary."
- "Devices in secondary may not sync with primary\n");
+ "Devices in secondary may not sync with primary");
return ret;
}
@@ -331,13 +331,13 @@ rte_eal_hotplug_remove(const char *busname, const char *devname)
bus = rte_bus_find_by_name(busname);
if (bus == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", busname);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", busname);
return -ENOENT;
}
dev = bus->find_device(NULL, cmp_dev_name, devname);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", devname);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", devname);
return -EINVAL;
}
@@ -351,14 +351,14 @@ local_dev_remove(struct rte_device *dev)
int ret;
if (dev->bus->unplug == NULL) {
- RTE_LOG(ERR, EAL, "Function unplug not supported by bus (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Function unplug not supported by bus (%s)",
dev->bus->name);
return -ENOTSUP;
}
ret = dev->bus->unplug(dev);
if (ret) {
- RTE_LOG(ERR, EAL, "Driver cannot detach the device (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Driver cannot detach the device (%s)",
dev->name);
return (ret < 0) ? ret : -ENOENT;
}
@@ -374,7 +374,7 @@ rte_dev_remove(struct rte_device *dev)
int ret;
if (!rte_dev_is_probed(dev)) {
- RTE_LOG(ERR, EAL, "Device is not probed\n");
+ RTE_LOG_LINE(ERR, EAL, "Device is not probed");
return -ENOENT;
}
@@ -394,13 +394,13 @@ rte_dev_remove(struct rte_device *dev)
*/
ret = eal_dev_hotplug_request_to_primary(&req);
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to send hotplug request to primary\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to send hotplug request to primary");
return -ENOMSG;
}
if (req.result != 0)
- RTE_LOG(ERR, EAL,
- "Failed to hotplug remove device\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to hotplug remove device");
return req.result;
}
@@ -414,8 +414,8 @@ rte_dev_remove(struct rte_device *dev)
* part of the secondary processes still detached it successfully.
*/
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to send device detach request to secondary\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to send device detach request to secondary");
ret = -ENOMSG;
goto rollback;
}
@@ -425,8 +425,8 @@ rte_dev_remove(struct rte_device *dev)
* is necessary.
*/
if (req.result != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to detach device on secondary process\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to detach device on secondary process");
ret = req.result;
/**
* if -ENOENT, we don't need to rollback, since devices is
@@ -441,8 +441,8 @@ rte_dev_remove(struct rte_device *dev)
/* if primary failed, still need to consider if rollback is necessary */
if (ret != 0) {
- RTE_LOG(ERR, EAL,
- "Failed to detach device on primary process\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to detach device on primary process");
/* if -ENOENT, we don't need to rollback */
if (ret == -ENOENT)
return ret;
@@ -456,9 +456,9 @@ rte_dev_remove(struct rte_device *dev)
/* primary send rollback request to secondary. */
if (eal_dev_hotplug_request_to_secondary(&req) != 0)
- RTE_LOG(WARNING, EAL,
+ RTE_LOG_LINE(WARNING, EAL,
"Failed to rollback device detach on secondary."
- "Devices in secondary may not sync with primary\n");
+ "Devices in secondary may not sync with primary");
return ret;
}
@@ -508,16 +508,16 @@ rte_dev_event_callback_register(const char *device_name,
}
TAILQ_INSERT_TAIL(&dev_event_cbs, event_cb, next);
} else {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Failed to allocate memory for device "
"event callback.");
ret = -ENOMEM;
goto error;
}
} else {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"The callback is already exist, no need "
- "to register again.\n");
+ "to register again.");
event_cb = NULL;
ret = -EEXIST;
goto error;
@@ -635,17 +635,17 @@ rte_dev_iterator_init(struct rte_dev_iterator *it,
* one layer specified.
*/
if (bus == NULL && cls == NULL) {
- RTE_LOG(DEBUG, EAL, "Either bus or class must be specified.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Either bus or class must be specified.");
rte_errno = EINVAL;
goto get_out;
}
if (bus != NULL && bus->dev_iterate == NULL) {
- RTE_LOG(DEBUG, EAL, "Bus %s not supported\n", bus->name);
+ RTE_LOG_LINE(DEBUG, EAL, "Bus %s not supported", bus->name);
rte_errno = ENOTSUP;
goto get_out;
}
if (cls != NULL && cls->dev_iterate == NULL) {
- RTE_LOG(DEBUG, EAL, "Class %s not supported\n", cls->name);
+ RTE_LOG_LINE(DEBUG, EAL, "Class %s not supported", cls->name);
rte_errno = ENOTSUP;
goto get_out;
}
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index fb5d0a293b..dbf5affa76 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -39,12 +39,12 @@ devargs_bus_parse_default(struct rte_devargs *devargs,
/* Parse devargs name from bus key-value list. */
name = rte_kvargs_get(bus_args, "name");
if (name == NULL) {
- RTE_LOG(DEBUG, EAL, "devargs name not found: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "devargs name not found: %s",
devargs->data);
return 0;
}
if (rte_strscpy(devargs->name, name, sizeof(devargs->name)) < 0) {
- RTE_LOG(ERR, EAL, "devargs name too long: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "devargs name too long: %s",
devargs->data);
return -E2BIG;
}
@@ -79,7 +79,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs,
if (devargs->data != devstr) {
devargs->data = strdup(devstr);
if (devargs->data == NULL) {
- RTE_LOG(ERR, EAL, "OOM\n");
+ RTE_LOG_LINE(ERR, EAL, "OOM");
ret = -ENOMEM;
goto get_out;
}
@@ -133,7 +133,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs,
devargs->bus_str = layers[i].str;
devargs->bus = rte_bus_find_by_name(kv->value);
if (devargs->bus == NULL) {
- RTE_LOG(ERR, EAL, "Could not find bus \"%s\"\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not find bus \"%s\"",
kv->value);
ret = -EFAULT;
goto get_out;
@@ -142,7 +142,7 @@ rte_devargs_layers_parse(struct rte_devargs *devargs,
devargs->cls_str = layers[i].str;
devargs->cls = rte_class_find_by_name(kv->value);
if (devargs->cls == NULL) {
- RTE_LOG(ERR, EAL, "Could not find class \"%s\"\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not find class \"%s\"",
kv->value);
ret = -EFAULT;
goto get_out;
@@ -217,7 +217,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev)
da->name[i] = devname[i];
i++;
if (i == maxlen) {
- RTE_LOG(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu\n",
+ RTE_LOG_LINE(WARNING, EAL, "Parsing \"%s\": device name should be shorter than %zu",
dev, maxlen);
da->name[i - 1] = '\0';
return -EINVAL;
@@ -227,7 +227,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev)
if (bus == NULL) {
bus = rte_bus_find_by_device_name(da->name);
if (bus == NULL) {
- RTE_LOG(ERR, EAL, "failed to parse device \"%s\"\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to parse device \"%s\"",
da->name);
return -EFAULT;
}
@@ -239,7 +239,7 @@ rte_devargs_parse(struct rte_devargs *da, const char *dev)
else
da->data = strdup("");
if (da->data == NULL) {
- RTE_LOG(ERR, EAL, "not enough memory to parse arguments\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory to parse arguments");
return -ENOMEM;
}
da->drv_str = da->data;
@@ -266,7 +266,7 @@ rte_devargs_parsef(struct rte_devargs *da, const char *format, ...)
len += 1;
dev = calloc(1, (size_t)len);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "not enough memory to parse device\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory to parse device");
return -ENOMEM;
}
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 95da55d9b0..721cb63bf2 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -76,7 +76,7 @@ eal_dynmem_memseg_lists_init(void)
n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count();
memtypes = calloc(n_memtypes, sizeof(*memtypes));
if (memtypes == NULL) {
- RTE_LOG(ERR, EAL, "Cannot allocate space for memory types\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate space for memory types");
return -1;
}
@@ -101,8 +101,8 @@ eal_dynmem_memseg_lists_init(void)
memtypes[cur_type].page_sz = hugepage_sz;
memtypes[cur_type].socket_id = socket_id;
- RTE_LOG(DEBUG, EAL, "Detected memory type: "
- "socket_id:%u hugepage_sz:%" PRIu64 "\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Detected memory type: "
+ "socket_id:%u hugepage_sz:%" PRIu64,
socket_id, hugepage_sz);
}
}
@@ -120,7 +120,7 @@ eal_dynmem_memseg_lists_init(void)
max_seglists_per_type = RTE_MAX_MEMSEG_LISTS / n_memtypes;
if (max_seglists_per_type == 0) {
- RTE_LOG(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot accommodate all memory types, please increase RTE_MAX_MEMSEG_LISTS");
goto out;
}
@@ -171,15 +171,15 @@ eal_dynmem_memseg_lists_init(void)
/* limit number of segment lists according to our maximum */
n_seglists = RTE_MIN(n_seglists, max_seglists_per_type);
- RTE_LOG(DEBUG, EAL, "Creating %i segment lists: "
- "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64 "\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Creating %i segment lists: "
+ "n_segs:%i socket_id:%i hugepage_sz:%" PRIu64,
n_seglists, n_segs, socket_id, pagesz);
/* create all segment lists */
for (cur_seglist = 0; cur_seglist < n_seglists; cur_seglist++) {
if (msl_idx >= RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL,
- "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS");
goto out;
}
msl = &mcfg->memsegs[msl_idx++];
@@ -189,7 +189,7 @@ eal_dynmem_memseg_lists_init(void)
goto out;
if (eal_memseg_list_alloc(msl, 0)) {
- RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list");
goto out;
}
}
@@ -287,9 +287,9 @@ eal_dynmem_hugepage_init(void)
if (num_pages == 0)
continue;
- RTE_LOG(DEBUG, EAL,
+ RTE_LOG_LINE(DEBUG, EAL,
"Allocating %u pages of size %" PRIu64 "M "
- "on socket %i\n",
+ "on socket %i",
num_pages, hpi->hugepage_sz >> 20, socket_id);
/* we may not be able to allocate all pages in one go,
@@ -307,7 +307,7 @@ eal_dynmem_hugepage_init(void)
pages = malloc(sizeof(*pages) * needed);
if (pages == NULL) {
- RTE_LOG(ERR, EAL, "Failed to malloc pages\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to malloc pages");
return -1;
}
@@ -342,7 +342,7 @@ eal_dynmem_hugepage_init(void)
continue;
if (rte_mem_alloc_validator_register("socket-limit",
limits_callback, i, limit))
- RTE_LOG(ERR, EAL, "Failed to register socket limits validator callback\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to register socket limits validator callback");
}
}
return 0;
@@ -515,8 +515,8 @@ eal_dynmem_calc_num_pages_per_socket(
internal_conf->socket_mem[socket] / 0x100000);
available = requested -
((unsigned int)(memory[socket] / 0x100000));
- RTE_LOG(ERR, EAL, "Not enough memory available on "
- "socket %u! Requested: %uMB, available: %uMB\n",
+ RTE_LOG_LINE(ERR, EAL, "Not enough memory available on "
+ "socket %u! Requested: %uMB, available: %uMB",
socket, requested, available);
return -1;
}
@@ -526,8 +526,8 @@ eal_dynmem_calc_num_pages_per_socket(
if (total_mem > 0) {
requested = (unsigned int)(internal_conf->memory / 0x100000);
available = requested - (unsigned int)(total_mem / 0x100000);
- RTE_LOG(ERR, EAL, "Not enough memory available! "
- "Requested: %uMB, available: %uMB\n",
+ RTE_LOG_LINE(ERR, EAL, "Not enough memory available! "
+ "Requested: %uMB, available: %uMB",
requested, available);
return -1;
}
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 2055bfa57d..7b90e01500 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -83,7 +83,7 @@ resize_and_map(int fd, const char *path, void *addr, size_t len)
void *map_addr;
if (eal_file_truncate(fd, len)) {
- RTE_LOG(ERR, EAL, "Cannot truncate %s\n", path);
+ RTE_LOG_LINE(ERR, EAL, "Cannot truncate %s", path);
return -1;
}
@@ -755,7 +755,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
void *new_data = rte_mem_map(data, mmap_len,
RTE_PROT_READ | RTE_PROT_WRITE, flags, fd, 0);
if (new_data == NULL) {
- RTE_LOG(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't remap anonymous memory: %s",
__func__, rte_strerror(rte_errno));
goto fail;
}
@@ -770,12 +770,12 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
*/
fd = eal_file_open(path, EAL_OPEN_CREATE | EAL_OPEN_READWRITE);
if (fd < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): couldn't open %s: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't open %s: %s",
__func__, path, rte_strerror(rte_errno));
goto fail;
} else if (eal_file_lock(
fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) {
- RTE_LOG(DEBUG, EAL, "%s(): couldn't lock %s: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't lock %s: %s",
__func__, path, rte_strerror(rte_errno));
rte_errno = EBUSY;
goto fail;
@@ -1017,7 +1017,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
*/
fd = tmp->fd;
if (eal_file_lock(fd, EAL_FLOCK_EXCLUSIVE, EAL_FLOCK_RETURN)) {
- RTE_LOG(DEBUG, EAL, "Cannot destroy fbarray - another process is using it\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot destroy fbarray - another process is using it");
rte_errno = EBUSY;
ret = -1;
goto out;
@@ -1026,7 +1026,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
/* we're OK to destroy the file */
eal_get_fbarray_path(path, sizeof(path), arr->name);
if (unlink(path)) {
- RTE_LOG(DEBUG, EAL, "Cannot unlink fbarray: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot unlink fbarray: %s",
strerror(errno));
rte_errno = errno;
/*
diff --git a/lib/eal/common/eal_common_interrupts.c b/lib/eal/common/eal_common_interrupts.c
index 97b64fed58..6a5723a068 100644
--- a/lib/eal/common/eal_common_interrupts.c
+++ b/lib/eal/common/eal_common_interrupts.c
@@ -15,7 +15,7 @@
/* Macros to check for valid interrupt handle */
#define CHECK_VALID_INTR_HANDLE(intr_handle) do { \
if (intr_handle == NULL) { \
- RTE_LOG(DEBUG, EAL, "Interrupt instance unallocated\n"); \
+ RTE_LOG_LINE(DEBUG, EAL, "Interrupt instance unallocated"); \
rte_errno = EINVAL; \
goto fail; \
} \
@@ -37,7 +37,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
* defined flags.
*/
if ((flags & ~RTE_INTR_INSTANCE_KNOWN_FLAGS) != 0) {
- RTE_LOG(DEBUG, EAL, "Invalid alloc flag passed 0x%x\n", flags);
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid alloc flag passed 0x%x", flags);
rte_errno = EINVAL;
return NULL;
}
@@ -48,7 +48,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
else
intr_handle = calloc(1, sizeof(*intr_handle));
if (intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Failed to allocate intr_handle\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to allocate intr_handle");
rte_errno = ENOMEM;
return NULL;
}
@@ -61,7 +61,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
sizeof(int));
}
if (intr_handle->efds == NULL) {
- RTE_LOG(ERR, EAL, "Fail to allocate event fd list\n");
+ RTE_LOG_LINE(ERR, EAL, "Fail to allocate event fd list");
rte_errno = ENOMEM;
goto fail;
}
@@ -75,7 +75,7 @@ struct rte_intr_handle *rte_intr_instance_alloc(uint32_t flags)
sizeof(struct rte_epoll_event));
}
if (intr_handle->elist == NULL) {
- RTE_LOG(ERR, EAL, "fail to allocate event fd list\n");
+ RTE_LOG_LINE(ERR, EAL, "fail to allocate event fd list");
rte_errno = ENOMEM;
goto fail;
}
@@ -100,7 +100,7 @@ struct rte_intr_handle *rte_intr_instance_dup(const struct rte_intr_handle *src)
struct rte_intr_handle *intr_handle;
if (src == NULL) {
- RTE_LOG(DEBUG, EAL, "Source interrupt instance unallocated\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Source interrupt instance unallocated");
rte_errno = EINVAL;
return NULL;
}
@@ -129,7 +129,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
CHECK_VALID_INTR_HANDLE(intr_handle);
if (size == 0) {
- RTE_LOG(DEBUG, EAL, "Size can't be zero\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Size can't be zero");
rte_errno = EINVAL;
goto fail;
}
@@ -143,7 +143,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
tmp_efds = realloc(intr_handle->efds, size * sizeof(int));
}
if (tmp_efds == NULL) {
- RTE_LOG(ERR, EAL, "Failed to realloc the efds list\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to realloc the efds list");
rte_errno = ENOMEM;
goto fail;
}
@@ -157,7 +157,7 @@ int rte_intr_event_list_update(struct rte_intr_handle *intr_handle, int size)
size * sizeof(struct rte_epoll_event));
}
if (tmp_elist == NULL) {
- RTE_LOG(ERR, EAL, "Failed to realloc the event list\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to realloc the event list");
rte_errno = ENOMEM;
goto fail;
}
@@ -253,8 +253,8 @@ int rte_intr_max_intr_set(struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (max_intr > intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds "
- "the number of available events (%d)\n", max_intr,
+ RTE_LOG_LINE(DEBUG, EAL, "Maximum interrupt vector ID (%d) exceeds "
+ "the number of available events (%d)", max_intr,
intr_handle->nb_intr);
rte_errno = ERANGE;
goto fail;
@@ -332,7 +332,7 @@ int rte_intr_efds_index_get(const struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index,
intr_handle->nb_intr);
rte_errno = EINVAL;
goto fail;
@@ -349,7 +349,7 @@ int rte_intr_efds_index_set(struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index,
intr_handle->nb_intr);
rte_errno = ERANGE;
goto fail;
@@ -368,7 +368,7 @@ struct rte_epoll_event *rte_intr_elist_index_get(
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index,
intr_handle->nb_intr);
rte_errno = ERANGE;
goto fail;
@@ -385,7 +385,7 @@ int rte_intr_elist_index_set(struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Invalid index %d, max limit %d\n", index,
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid index %d, max limit %d", index,
intr_handle->nb_intr);
rte_errno = ERANGE;
goto fail;
@@ -408,7 +408,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
return 0;
if (size > intr_handle->nb_intr) {
- RTE_LOG(DEBUG, EAL, "Invalid size %d, max limit %d\n", size,
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid size %d, max limit %d", size,
intr_handle->nb_intr);
rte_errno = ERANGE;
goto fail;
@@ -419,7 +419,7 @@ int rte_intr_vec_list_alloc(struct rte_intr_handle *intr_handle,
else
intr_handle->intr_vec = calloc(size, sizeof(int));
if (intr_handle->intr_vec == NULL) {
- RTE_LOG(ERR, EAL, "Failed to allocate %d intr_vec\n", size);
+ RTE_LOG_LINE(ERR, EAL, "Failed to allocate %d intr_vec", size);
rte_errno = ENOMEM;
goto fail;
}
@@ -437,7 +437,7 @@ int rte_intr_vec_list_index_get(const struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->vec_list_size) {
- RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d",
index, intr_handle->vec_list_size);
rte_errno = ERANGE;
goto fail;
@@ -454,7 +454,7 @@ int rte_intr_vec_list_index_set(struct rte_intr_handle *intr_handle,
CHECK_VALID_INTR_HANDLE(intr_handle);
if (index >= intr_handle->vec_list_size) {
- RTE_LOG(DEBUG, EAL, "Index %d greater than vec list size %d\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Index %d greater than vec list size %d",
index, intr_handle->vec_list_size);
rte_errno = ERANGE;
goto fail;
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 6807a38247..4ec1996d12 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -174,8 +174,8 @@ rte_eal_cpu_init(void)
lcore_config[lcore_id].core_role = ROLE_RTE;
lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id);
lcore_config[lcore_id].socket_id = socket_id;
- RTE_LOG(DEBUG, EAL, "Detected lcore %u as "
- "core %u on socket %u\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Detected lcore %u as "
+ "core %u on socket %u",
lcore_id, lcore_config[lcore_id].core_id,
lcore_config[lcore_id].socket_id);
count++;
@@ -183,17 +183,17 @@ rte_eal_cpu_init(void)
for (; lcore_id < CPU_SETSIZE; lcore_id++) {
if (eal_cpu_detected(lcore_id) == 0)
continue;
- RTE_LOG(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Skipped lcore %u as core %u on socket %u",
lcore_id, eal_cpu_core_id(lcore_id),
eal_cpu_socket_id(lcore_id));
}
/* Set the count of enabled logical cores of the EAL configuration */
config->lcore_count = count;
- RTE_LOG(DEBUG, EAL,
- "Maximum logical cores by configuration: %u\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Maximum logical cores by configuration: %u",
RTE_MAX_LCORE);
- RTE_LOG(INFO, EAL, "Detected CPU lcores: %u\n", config->lcore_count);
+ RTE_LOG_LINE(INFO, EAL, "Detected CPU lcores: %u", config->lcore_count);
/* sort all socket id's in ascending order */
qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id),
@@ -208,7 +208,7 @@ rte_eal_cpu_init(void)
socket_id;
prev_socket_id = socket_id;
}
- RTE_LOG(INFO, EAL, "Detected NUMA nodes: %u\n", config->numa_node_count);
+ RTE_LOG_LINE(INFO, EAL, "Detected NUMA nodes: %u", config->numa_node_count);
return 0;
}
@@ -247,7 +247,7 @@ callback_init(struct lcore_callback *callback, unsigned int lcore_id)
{
if (callback->init == NULL)
return 0;
- RTE_LOG(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Call init for lcore callback %s, lcore_id %u",
callback->name, lcore_id);
return callback->init(lcore_id, callback->arg);
}
@@ -257,7 +257,7 @@ callback_uninit(struct lcore_callback *callback, unsigned int lcore_id)
{
if (callback->uninit == NULL)
return;
- RTE_LOG(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Call uninit for lcore callback %s, lcore_id %u",
callback->name, lcore_id);
callback->uninit(lcore_id, callback->arg);
}
@@ -311,7 +311,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
}
no_init:
TAILQ_INSERT_TAIL(&lcore_callbacks, callback, next);
- RTE_LOG(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Registered new lcore callback %s (%sinit, %suninit).",
callback->name, callback->init == NULL ? "NO " : "",
callback->uninit == NULL ? "NO " : "");
out:
@@ -339,7 +339,7 @@ rte_lcore_callback_unregister(void *handle)
no_uninit:
TAILQ_REMOVE(&lcore_callbacks, callback, next);
rte_rwlock_write_unlock(&lcore_lock);
- RTE_LOG(DEBUG, EAL, "Unregistered lcore callback %s-%p.\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Unregistered lcore callback %s-%p.",
callback->name, callback->arg);
free_callback(callback);
}
@@ -361,7 +361,7 @@ eal_lcore_non_eal_allocate(void)
break;
}
if (lcore_id == RTE_MAX_LCORE) {
- RTE_LOG(DEBUG, EAL, "No lcore available.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No lcore available.");
goto out;
}
TAILQ_FOREACH(callback, &lcore_callbacks, next) {
@@ -375,7 +375,7 @@ eal_lcore_non_eal_allocate(void)
callback_uninit(prev, lcore_id);
prev = TAILQ_PREV(prev, lcore_callbacks_head, next);
}
- RTE_LOG(DEBUG, EAL, "Initialization refused for lcore %u.\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Initialization refused for lcore %u.",
lcore_id);
cfg->lcore_role[lcore_id] = ROLE_OFF;
cfg->lcore_count--;
diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c
index ab04479c1c..feb22c2b2f 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -186,7 +186,7 @@ eal_memalloc_mem_event_callback_register(const char *name,
ret = 0;
- RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' registered\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' registered",
name, arg);
unlock:
@@ -225,7 +225,7 @@ eal_memalloc_mem_event_callback_unregister(const char *name, void *arg)
ret = 0;
- RTE_LOG(DEBUG, EAL, "Mem event callback '%s:%p' unregistered\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Mem event callback '%s:%p' unregistered",
name, arg);
unlock:
@@ -242,7 +242,7 @@ eal_memalloc_mem_event_notify(enum rte_mem_event event, const void *start,
rte_rwlock_read_lock(&mem_event_rwlock);
TAILQ_FOREACH(entry, &mem_event_callback_list, next) {
- RTE_LOG(DEBUG, EAL, "Calling mem event callback '%s:%p'\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Calling mem event callback '%s:%p'",
entry->name, entry->arg);
entry->clb(event, start, len, entry->arg);
}
@@ -293,7 +293,7 @@ eal_memalloc_mem_alloc_validator_register(const char *name,
ret = 0;
- RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i with limit %zu registered",
name, socket_id, limit);
unlock:
@@ -332,7 +332,7 @@ eal_memalloc_mem_alloc_validator_unregister(const char *name, int socket_id)
ret = 0;
- RTE_LOG(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Mem alloc validator '%s' on socket %i unregistered",
name, socket_id);
unlock:
@@ -351,7 +351,7 @@ eal_memalloc_mem_alloc_validate(int socket_id, size_t new_len)
TAILQ_FOREACH(entry, &mem_alloc_validator_list, next) {
if (entry->socket_id != socket_id || entry->limit > new_len)
continue;
- RTE_LOG(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Calling mem alloc validator '%s' on socket %i",
entry->name, entry->socket_id);
if (entry->clb(socket_id, entry->limit, new_len) < 0)
ret = -1;
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index d9433db623..9e183669a6 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -57,7 +57,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
if (system_page_sz == 0)
system_page_sz = rte_mem_page_size();
- RTE_LOG(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes\n", *size);
+ RTE_LOG_LINE(DEBUG, EAL, "Ask a virtual area of 0x%zx bytes", *size);
addr_is_hint = (flags & EAL_VIRTUAL_AREA_ADDR_IS_HINT) > 0;
allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0;
@@ -94,7 +94,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
do {
map_sz = no_align ? *size : *size + page_sz;
if (map_sz > SIZE_MAX) {
- RTE_LOG(ERR, EAL, "Map size too big\n");
+ RTE_LOG_LINE(ERR, EAL, "Map size too big");
rte_errno = E2BIG;
return NULL;
}
@@ -125,16 +125,16 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
RTE_PTR_ALIGN(mapped_addr, page_sz);
if (*size == 0) {
- RTE_LOG(ERR, EAL, "Cannot get a virtual area of any size: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area of any size: %s",
rte_strerror(rte_errno));
return NULL;
} else if (mapped_addr == NULL) {
- RTE_LOG(ERR, EAL, "Cannot get a virtual area: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area: %s",
rte_strerror(rte_errno));
return NULL;
} else if (requested_addr != NULL && !addr_is_hint &&
aligned_addr != requested_addr) {
- RTE_LOG(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot get a virtual area at requested address: %p (got %p)",
requested_addr, aligned_addr);
eal_mem_free(mapped_addr, map_sz);
rte_errno = EADDRNOTAVAIL;
@@ -146,19 +146,19 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
* a base virtual address.
*/
if (internal_conf->base_virtaddr != 0) {
- RTE_LOG(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n",
+ RTE_LOG_LINE(WARNING, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!",
requested_addr, aligned_addr);
- RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory into secondary processes\n");
+ RTE_LOG_LINE(WARNING, EAL, " This may cause issues with mapping memory into secondary processes");
} else {
- RTE_LOG(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!\n",
+ RTE_LOG_LINE(DEBUG, EAL, "WARNING! Base virtual address hint (%p != %p) not respected!",
requested_addr, aligned_addr);
- RTE_LOG(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes\n");
+ RTE_LOG_LINE(DEBUG, EAL, " This may cause issues with mapping memory into secondary processes");
}
} else if (next_baseaddr != NULL) {
next_baseaddr = RTE_PTR_ADD(aligned_addr, *size);
}
- RTE_LOG(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Virtual area found at %p (size = 0x%zx)",
aligned_addr, *size);
if (unmap) {
@@ -202,7 +202,7 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name,
{
if (rte_fbarray_init(&msl->memseg_arr, name, n_segs,
sizeof(struct rte_memseg))) {
- RTE_LOG(ERR, EAL, "Cannot allocate memseg list: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list: %s",
rte_strerror(rte_errno));
return -1;
}
@@ -212,8 +212,8 @@ eal_memseg_list_init_named(struct rte_memseg_list *msl, const char *name,
msl->base_va = NULL;
msl->heap = heap;
- RTE_LOG(DEBUG, EAL,
- "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Memseg list allocated at socket %i, page size 0x%"PRIx64"kB",
socket_id, page_sz >> 10);
return 0;
@@ -251,8 +251,8 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags)
* including common code, so don't duplicate the message.
*/
if (rte_errno == EADDRNOTAVAIL)
- RTE_LOG(ERR, EAL, "Cannot reserve %llu bytes at [%p] - "
- "please use '--" OPT_BASE_VIRTADDR "' option\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot reserve %llu bytes at [%p] - "
+ "please use '--" OPT_BASE_VIRTADDR "' option",
(unsigned long long)mem_sz, msl->base_va);
#endif
return -1;
@@ -260,7 +260,7 @@ eal_memseg_list_alloc(struct rte_memseg_list *msl, int reserve_flags)
msl->base_va = addr;
msl->len = mem_sz;
- RTE_LOG(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx\n",
+ RTE_LOG_LINE(DEBUG, EAL, "VA reserved for memseg list at %p, size %zx",
addr, mem_sz);
return 0;
@@ -472,7 +472,7 @@ rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
/* FreeBSD boots with legacy mem enabled by default */
if (internal_conf->legacy_mem) {
- RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported");
rte_errno = ENOTSUP;
return -1;
}
@@ -487,7 +487,7 @@ rte_mem_event_callback_unregister(const char *name, void *arg)
/* FreeBSD boots with legacy mem enabled by default */
if (internal_conf->legacy_mem) {
- RTE_LOG(DEBUG, EAL, "Registering mem event callbacks not supported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Registering mem event callbacks not supported");
rte_errno = ENOTSUP;
return -1;
}
@@ -503,7 +503,7 @@ rte_mem_alloc_validator_register(const char *name,
/* FreeBSD boots with legacy mem enabled by default */
if (internal_conf->legacy_mem) {
- RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported");
rte_errno = ENOTSUP;
return -1;
}
@@ -519,7 +519,7 @@ rte_mem_alloc_validator_unregister(const char *name, int socket_id)
/* FreeBSD boots with legacy mem enabled by default */
if (internal_conf->legacy_mem) {
- RTE_LOG(DEBUG, EAL, "Registering mem alloc validators not supported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Registering mem alloc validators not supported");
rte_errno = ENOTSUP;
return -1;
}
@@ -545,10 +545,10 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
if (!(iova & *mask))
return 0;
- RTE_LOG(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range\n",
+ RTE_LOG_LINE(DEBUG, EAL, "memseg iova %"PRIx64", len %zx, out of range",
ms->iova, ms->len);
- RTE_LOG(DEBUG, EAL, "\tusing dma mask %"PRIx64"\n", *mask);
+ RTE_LOG_LINE(DEBUG, EAL, "\tusing dma mask %"PRIx64, *mask);
return 1;
}
@@ -565,7 +565,7 @@ check_dma_mask(uint8_t maskbits, bool thread_unsafe)
/* Sanity check. We only check width can be managed with 64 bits
* variables. Indeed any higher value is likely wrong. */
if (maskbits > MAX_DMA_MASK_BITS) {
- RTE_LOG(ERR, EAL, "wrong dma mask size %u (Max: %u)\n",
+ RTE_LOG_LINE(ERR, EAL, "wrong dma mask size %u (Max: %u)",
maskbits, MAX_DMA_MASK_BITS);
return -1;
}
@@ -925,7 +925,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
/* get next available socket ID */
socket_id = mcfg->next_socket_id;
if (socket_id > INT32_MAX) {
- RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's");
rte_errno = ENOSPC;
ret = -1;
goto unlock;
@@ -1030,7 +1030,7 @@ rte_eal_memory_detach(void)
/* detach internal memory subsystem data first */
if (eal_memalloc_cleanup())
- RTE_LOG(ERR, EAL, "Could not release memory subsystem data\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not release memory subsystem data");
for (i = 0; i < RTE_DIM(mcfg->memsegs); i++) {
struct rte_memseg_list *msl = &mcfg->memsegs[i];
@@ -1047,7 +1047,7 @@ rte_eal_memory_detach(void)
*/
if (!msl->external)
if (rte_mem_unmap(msl->base_va, msl->len) != 0)
- RTE_LOG(ERR, EAL, "Could not unmap memory: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not unmap memory: %s",
rte_strerror(rte_errno));
/*
@@ -1056,7 +1056,7 @@ rte_eal_memory_detach(void)
* have no way of knowing if they still do.
*/
if (rte_fbarray_detach(&msl->memseg_arr))
- RTE_LOG(ERR, EAL, "Could not detach fbarray: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not detach fbarray: %s",
rte_strerror(rte_errno));
}
rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
@@ -1068,7 +1068,7 @@ rte_eal_memory_detach(void)
*/
if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) {
if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0)
- RTE_LOG(ERR, EAL, "Could not unmap shared memory config: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not unmap shared memory config: %s",
rte_strerror(rte_errno));
}
rte_eal_get_configuration()->mem_config = NULL;
@@ -1084,7 +1084,7 @@ rte_eal_memory_init(void)
eal_get_internal_configuration();
int retval;
- RTE_LOG(DEBUG, EAL, "Setting up physically contiguous memory...\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Setting up physically contiguous memory...");
if (rte_eal_memseg_init() < 0)
goto fail;
@@ -1213,7 +1213,7 @@ handle_eal_memzone_info_request(const char *cmd __rte_unused,
/* go through each page occupied by this memzone */
msl = rte_mem_virt2memseg_list(mz->addr);
if (!msl) {
- RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone");
return -1;
}
page_sz = (size_t)mz->hugepage_sz;
@@ -1404,7 +1404,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused,
ms = rte_fbarray_get(arr, ms_idx);
if (ms == NULL) {
rte_mcfg_mem_read_unlock();
- RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg.");
return -1;
}
@@ -1477,7 +1477,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused,
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
if (ms == NULL) {
rte_mcfg_mem_read_unlock();
- RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg.");
return -1;
}
@@ -1555,7 +1555,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused,
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
if (ms == NULL) {
rte_mcfg_mem_read_unlock();
- RTE_LOG(DEBUG, EAL, "Error fetching requested memseg.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Error fetching requested memseg.");
return -1;
}
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index 1f3e701499..fc478d0fac 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -31,13 +31,13 @@ rte_memzone_max_set(size_t max)
struct rte_mem_config *mcfg;
if (eal_get_internal_configuration()->init_complete > 0) {
- RTE_LOG(ERR, EAL, "Max memzone cannot be set after EAL init\n");
+ RTE_LOG_LINE(ERR, EAL, "Max memzone cannot be set after EAL init");
return -1;
}
mcfg = rte_eal_get_configuration()->mem_config;
if (mcfg == NULL) {
- RTE_LOG(ERR, EAL, "Failed to set max memzone count\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to set max memzone count");
return -1;
}
@@ -116,16 +116,16 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
/* no more room in config */
if (arr->count >= arr->len) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"%s(): Number of requested memzone segments exceeds maximum "
- "%u\n", __func__, arr->len);
+ "%u", __func__, arr->len);
rte_errno = ENOSPC;
return NULL;
}
if (strlen(name) > sizeof(mz->name) - 1) {
- RTE_LOG(DEBUG, EAL, "%s(): memzone <%s>: name too long\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s>: name too long",
__func__, name);
rte_errno = ENAMETOOLONG;
return NULL;
@@ -133,7 +133,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
/* zone already exist */
if ((memzone_lookup_thread_unsafe(name)) != NULL) {
- RTE_LOG(DEBUG, EAL, "%s(): memzone <%s> already exists\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): memzone <%s> already exists",
__func__, name);
rte_errno = EEXIST;
return NULL;
@@ -141,7 +141,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
/* if alignment is not a power of two */
if (align && !rte_is_power_of_2(align)) {
- RTE_LOG(ERR, EAL, "%s(): Invalid alignment: %u\n", __func__,
+ RTE_LOG_LINE(ERR, EAL, "%s(): Invalid alignment: %u", __func__,
align);
rte_errno = EINVAL;
return NULL;
@@ -218,7 +218,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
}
if (mz == NULL) {
- RTE_LOG(ERR, EAL, "%s(): Cannot find free memzone\n", __func__);
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot find free memzone", __func__);
malloc_heap_free(elem);
rte_errno = ENOSPC;
return NULL;
@@ -323,7 +323,7 @@ rte_memzone_free(const struct rte_memzone *mz)
if (found_mz == NULL) {
ret = -EINVAL;
} else if (found_mz->addr == NULL) {
- RTE_LOG(ERR, EAL, "Memzone is not allocated\n");
+ RTE_LOG_LINE(ERR, EAL, "Memzone is not allocated");
ret = -EINVAL;
} else {
addr = found_mz->addr;
@@ -385,7 +385,7 @@ dump_memzone(const struct rte_memzone *mz, void *arg)
/* go through each page occupied by this memzone */
msl = rte_mem_virt2memseg_list(mz->addr);
if (!msl) {
- RTE_LOG(DEBUG, EAL, "Skipping bad memzone\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Skipping bad memzone");
return;
}
page_sz = (size_t)mz->hugepage_sz;
@@ -434,11 +434,11 @@ rte_eal_memzone_init(void)
if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
rte_fbarray_init(&mcfg->memzones, "memzone",
rte_memzone_max_get(), sizeof(struct rte_memzone))) {
- RTE_LOG(ERR, EAL, "Cannot allocate memzone list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memzone list");
ret = -1;
} else if (rte_eal_process_type() == RTE_PROC_SECONDARY &&
rte_fbarray_attach(&mcfg->memzones)) {
- RTE_LOG(ERR, EAL, "Cannot attach to memzone list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot attach to memzone list");
ret = -1;
}
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index e9ba01fb89..c1af05b134 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -255,14 +255,14 @@ eal_option_device_add(enum rte_devtype type, const char *optarg)
optlen = strlen(optarg) + 1;
devopt = calloc(1, sizeof(*devopt) + optlen);
if (devopt == NULL) {
- RTE_LOG(ERR, EAL, "Unable to allocate device option\n");
+ RTE_LOG_LINE(ERR, EAL, "Unable to allocate device option");
return -ENOMEM;
}
devopt->type = type;
ret = strlcpy(devopt->arg, optarg, optlen);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Unable to copy device option\n");
+ RTE_LOG_LINE(ERR, EAL, "Unable to copy device option");
free(devopt);
return -EINVAL;
}
@@ -281,7 +281,7 @@ eal_option_device_parse(void)
if (ret == 0) {
ret = rte_devargs_add(devopt->type, devopt->arg);
if (ret)
- RTE_LOG(ERR, EAL, "Unable to parse device '%s'\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to parse device '%s'",
devopt->arg);
}
TAILQ_REMOVE(&devopt_list, devopt, next);
@@ -360,7 +360,7 @@ eal_plugin_add(const char *path)
solib = malloc(sizeof(*solib));
if (solib == NULL) {
- RTE_LOG(ERR, EAL, "malloc(solib) failed\n");
+ RTE_LOG_LINE(ERR, EAL, "malloc(solib) failed");
return -1;
}
memset(solib, 0, sizeof(*solib));
@@ -390,7 +390,7 @@ eal_plugindir_init(const char *path)
d = opendir(path);
if (d == NULL) {
- RTE_LOG(ERR, EAL, "failed to open directory %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to open directory %s: %s",
path, strerror(errno));
return -1;
}
@@ -442,13 +442,13 @@ verify_perms(const char *dirpath)
/* call stat to check for permissions and ensure not world writable */
if (stat(dirpath, &st) != 0) {
- RTE_LOG(ERR, EAL, "Error with stat on %s, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error with stat on %s, %s",
dirpath, strerror(errno));
return -1;
}
if (st.st_mode & S_IWOTH) {
- RTE_LOG(ERR, EAL,
- "Error, directory path %s is world-writable and insecure\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error, directory path %s is world-writable and insecure",
dirpath);
return -1;
}
@@ -466,16 +466,16 @@ eal_dlopen(const char *pathname)
/* not a full or relative path, try a load from system dirs */
retval = dlopen(pathname, RTLD_NOW);
if (retval == NULL)
- RTE_LOG(ERR, EAL, "%s\n", dlerror());
+ RTE_LOG_LINE(ERR, EAL, "%s", dlerror());
return retval;
}
if (realp == NULL) {
- RTE_LOG(ERR, EAL, "Error with realpath for %s, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error with realpath for %s, %s",
pathname, strerror(errno));
goto out;
}
if (strnlen(realp, PATH_MAX) == PATH_MAX) {
- RTE_LOG(ERR, EAL, "Error, driver path greater than PATH_MAX\n");
+ RTE_LOG_LINE(ERR, EAL, "Error, driver path greater than PATH_MAX");
goto out;
}
@@ -485,7 +485,7 @@ eal_dlopen(const char *pathname)
retval = dlopen(realp, RTLD_NOW);
if (retval == NULL)
- RTE_LOG(ERR, EAL, "%s\n", dlerror());
+ RTE_LOG_LINE(ERR, EAL, "%s", dlerror());
out:
free(realp);
return retval;
@@ -500,7 +500,7 @@ is_shared_build(void)
len = strlcpy(soname, EAL_SO"."ABI_VERSION, sizeof(soname));
if (len > sizeof(soname)) {
- RTE_LOG(ERR, EAL, "Shared lib name too long in shared build check\n");
+ RTE_LOG_LINE(ERR, EAL, "Shared lib name too long in shared build check");
len = sizeof(soname) - 1;
}
@@ -508,10 +508,10 @@ is_shared_build(void)
void *handle;
/* check if we have this .so loaded, if so - shared build */
- RTE_LOG(DEBUG, EAL, "Checking presence of .so '%s'\n", soname);
+ RTE_LOG_LINE(DEBUG, EAL, "Checking presence of .so '%s'", soname);
handle = dlopen(soname, RTLD_LAZY | RTLD_NOLOAD);
if (handle != NULL) {
- RTE_LOG(INFO, EAL, "Detected shared linkage of DPDK\n");
+ RTE_LOG_LINE(INFO, EAL, "Detected shared linkage of DPDK");
dlclose(handle);
return 1;
}
@@ -524,7 +524,7 @@ is_shared_build(void)
}
}
- RTE_LOG(INFO, EAL, "Detected static linkage of DPDK\n");
+ RTE_LOG_LINE(INFO, EAL, "Detected static linkage of DPDK");
return 0;
}
@@ -549,13 +549,13 @@ eal_plugins_init(void)
if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) {
if (eal_plugindir_init(solib->name) == -1) {
- RTE_LOG(ERR, EAL,
- "Cannot init plugin directory %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot init plugin directory %s",
solib->name);
return -1;
}
} else {
- RTE_LOG(DEBUG, EAL, "open shared lib %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "open shared lib %s",
solib->name);
solib->lib_handle = eal_dlopen(solib->name);
if (solib->lib_handle == NULL)
@@ -626,15 +626,15 @@ eal_parse_service_coremask(const char *coremask)
uint32_t lcore = idx;
if (main_lcore_parsed &&
cfg->main_lcore == lcore) {
- RTE_LOG(ERR, EAL,
- "lcore %u is main lcore, cannot use as service core\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "lcore %u is main lcore, cannot use as service core",
idx);
return -1;
}
if (eal_cpu_detected(idx) == 0) {
- RTE_LOG(ERR, EAL,
- "lcore %u unavailable\n", idx);
+ RTE_LOG_LINE(ERR, EAL,
+ "lcore %u unavailable", idx);
return -1;
}
@@ -658,9 +658,9 @@ eal_parse_service_coremask(const char *coremask)
return -1;
if (core_parsed && taken_lcore_count != count) {
- RTE_LOG(WARNING, EAL,
+ RTE_LOG_LINE(WARNING, EAL,
"Not all service cores are in the coremask. "
- "Please ensure -c or -l includes service cores\n");
+ "Please ensure -c or -l includes service cores");
}
cfg->service_lcore_count = count;
@@ -689,7 +689,7 @@ update_lcore_config(int *cores)
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (cores[i] != -1) {
if (eal_cpu_detected(i) == 0) {
- RTE_LOG(ERR, EAL, "lcore %u unavailable\n", i);
+ RTE_LOG_LINE(ERR, EAL, "lcore %u unavailable", i);
ret = -1;
continue;
}
@@ -717,7 +717,7 @@ check_core_list(int *lcores, unsigned int count)
if (lcores[i] < RTE_MAX_LCORE)
continue;
- RTE_LOG(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)\n",
+ RTE_LOG_LINE(ERR, EAL, "lcore %d >= RTE_MAX_LCORE (%d)",
lcores[i], RTE_MAX_LCORE);
overflow = true;
}
@@ -737,9 +737,9 @@ check_core_list(int *lcores, unsigned int count)
}
if (len > 0)
lcorestr[len - 1] = 0;
- RTE_LOG(ERR, EAL, "To use high physical core ids, "
+ RTE_LOG_LINE(ERR, EAL, "To use high physical core ids, "
"please use --lcores to map them to lcore ids below RTE_MAX_LCORE, "
- "e.g. --lcores %s\n", lcorestr);
+ "e.g. --lcores %s", lcorestr);
return -1;
}
@@ -769,7 +769,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores)
while ((i > 0) && isblank(coremask[i - 1]))
i--;
if (i == 0) {
- RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n",
+ RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]",
coremask_orig);
return -1;
}
@@ -778,7 +778,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores)
c = coremask[i];
if (isxdigit(c) == 0) {
/* invalid characters */
- RTE_LOG(ERR, EAL, "invalid characters in coremask: [%s]\n",
+ RTE_LOG_LINE(ERR, EAL, "invalid characters in coremask: [%s]",
coremask_orig);
return -1;
}
@@ -787,7 +787,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores)
{
if ((1 << j) & val) {
if (count >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n",
+ RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)",
RTE_MAX_LCORE);
return -1;
}
@@ -796,7 +796,7 @@ rte_eal_parse_coremask(const char *coremask, int *cores)
}
}
if (count == 0) {
- RTE_LOG(ERR, EAL, "No lcores in coremask: [%s]\n",
+ RTE_LOG_LINE(ERR, EAL, "No lcores in coremask: [%s]",
coremask_orig);
return -1;
}
@@ -864,8 +864,8 @@ eal_parse_service_corelist(const char *corelist)
uint32_t lcore = idx;
if (cfg->main_lcore == lcore &&
main_lcore_parsed) {
- RTE_LOG(ERR, EAL,
- "Error: lcore %u is main lcore, cannot use as service core\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error: lcore %u is main lcore, cannot use as service core",
idx);
return -1;
}
@@ -887,9 +887,9 @@ eal_parse_service_corelist(const char *corelist)
return -1;
if (core_parsed && taken_lcore_count != count) {
- RTE_LOG(WARNING, EAL,
+ RTE_LOG_LINE(WARNING, EAL,
"Not all service cores were in the coremask. "
- "Please ensure -c or -l includes service cores\n");
+ "Please ensure -c or -l includes service cores");
}
return 0;
@@ -943,7 +943,7 @@ eal_parse_corelist(const char *corelist, int *cores)
if (dup)
continue;
if (count >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)\n",
+ RTE_LOG_LINE(ERR, EAL, "Too many lcores provided. Cannot exceed RTE_MAX_LCORE (%d)",
RTE_MAX_LCORE);
return -1;
}
@@ -991,8 +991,8 @@ eal_parse_main_lcore(const char *arg)
/* ensure main core is not used as service core */
if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
- RTE_LOG(ERR, EAL,
- "Error: Main lcore is used as a service core\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Error: Main lcore is used as a service core");
return -1;
}
@@ -1132,8 +1132,8 @@ check_cpuset(rte_cpuset_t *set)
continue;
if (eal_cpu_detected(idx) == 0) {
- RTE_LOG(ERR, EAL, "core %u "
- "unavailable\n", idx);
+ RTE_LOG_LINE(ERR, EAL, "core %u "
+ "unavailable", idx);
return -1;
}
}
@@ -1612,8 +1612,8 @@ eal_parse_huge_unlink(const char *arg, struct hugepage_file_discipline *out)
return 0;
}
if (strcmp(arg, HUGE_UNLINK_NEVER) == 0) {
- RTE_LOG(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"="
- HUGE_UNLINK_NEVER" may create data leaks.\n");
+ RTE_LOG_LINE(WARNING, EAL, "Using --"OPT_HUGE_UNLINK"="
+ HUGE_UNLINK_NEVER" may create data leaks.");
out->unlink_existing = false;
return 0;
}
@@ -1648,24 +1648,24 @@ eal_parse_common_option(int opt, const char *optarg,
int lcore_indexes[RTE_MAX_LCORE];
if (eal_service_cores_parsed())
- RTE_LOG(WARNING, EAL,
- "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S\n");
+ RTE_LOG_LINE(WARNING, EAL,
+ "Service cores parsed before dataplane cores. Please ensure -c is before -s or -S");
if (rte_eal_parse_coremask(optarg, lcore_indexes) < 0) {
- RTE_LOG(ERR, EAL, "invalid coremask syntax\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid coremask syntax");
return -1;
}
if (update_lcore_config(lcore_indexes) < 0) {
char *available = available_cores();
- RTE_LOG(ERR, EAL,
- "invalid coremask, please check specified cores are part of %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "invalid coremask, please check specified cores are part of %s",
available);
free(available);
return -1;
}
if (core_parsed) {
- RTE_LOG(ERR, EAL, "Option -c is ignored, because (%s) is set!\n",
+ RTE_LOG_LINE(ERR, EAL, "Option -c is ignored, because (%s) is set!",
(core_parsed == LCORE_OPT_LST) ? "-l" :
(core_parsed == LCORE_OPT_MAP) ? "--lcore" :
"-c");
@@ -1680,25 +1680,25 @@ eal_parse_common_option(int opt, const char *optarg,
int lcore_indexes[RTE_MAX_LCORE];
if (eal_service_cores_parsed())
- RTE_LOG(WARNING, EAL,
- "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S\n");
+ RTE_LOG_LINE(WARNING, EAL,
+ "Service cores parsed before dataplane cores. Please ensure -l is before -s or -S");
if (eal_parse_corelist(optarg, lcore_indexes) < 0) {
- RTE_LOG(ERR, EAL, "invalid core list syntax\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid core list syntax");
return -1;
}
if (update_lcore_config(lcore_indexes) < 0) {
char *available = available_cores();
- RTE_LOG(ERR, EAL,
- "invalid core list, please check specified cores are part of %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "invalid core list, please check specified cores are part of %s",
available);
free(available);
return -1;
}
if (core_parsed) {
- RTE_LOG(ERR, EAL, "Option -l is ignored, because (%s) is set!\n",
+ RTE_LOG_LINE(ERR, EAL, "Option -l is ignored, because (%s) is set!",
(core_parsed == LCORE_OPT_MSK) ? "-c" :
(core_parsed == LCORE_OPT_MAP) ? "--lcore" :
"-l");
@@ -1711,14 +1711,14 @@ eal_parse_common_option(int opt, const char *optarg,
/* service coremask */
case 's':
if (eal_parse_service_coremask(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid service coremask\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid service coremask");
return -1;
}
break;
/* service corelist */
case 'S':
if (eal_parse_service_corelist(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid service core list\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid service core list");
return -1;
}
break;
@@ -1733,7 +1733,7 @@ eal_parse_common_option(int opt, const char *optarg,
case 'n':
conf->force_nchannel = atoi(optarg);
if (conf->force_nchannel == 0) {
- RTE_LOG(ERR, EAL, "invalid channel number\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid channel number");
return -1;
}
break;
@@ -1742,7 +1742,7 @@ eal_parse_common_option(int opt, const char *optarg,
conf->force_nrank = atoi(optarg);
if (conf->force_nrank == 0 ||
conf->force_nrank > 16) {
- RTE_LOG(ERR, EAL, "invalid rank number\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid rank number");
return -1;
}
break;
@@ -1756,13 +1756,13 @@ eal_parse_common_option(int opt, const char *optarg,
* write message at highest log level so it can always
* be seen
* even if info or warning messages are disabled */
- RTE_LOG(CRIT, EAL, "RTE Version: '%s'\n", rte_version());
+ RTE_LOG_LINE(CRIT, EAL, "RTE Version: '%s'", rte_version());
break;
/* long options */
case OPT_HUGE_UNLINK_NUM:
if (eal_parse_huge_unlink(optarg, &conf->hugepage_file) < 0) {
- RTE_LOG(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid --"OPT_HUGE_UNLINK" option");
return -1;
}
break;
@@ -1802,8 +1802,8 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_MAIN_LCORE_NUM:
if (eal_parse_main_lcore(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_MAIN_LCORE "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameter for --"
+ OPT_MAIN_LCORE);
return -1;
}
break;
@@ -1818,8 +1818,8 @@ eal_parse_common_option(int opt, const char *optarg,
#ifndef RTE_EXEC_ENV_WINDOWS
case OPT_SYSLOG_NUM:
if (eal_parse_syslog(optarg, conf) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_SYSLOG "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_SYSLOG);
return -1;
}
break;
@@ -1827,9 +1827,9 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_LOG_LEVEL_NUM: {
if (eal_parse_log_level(optarg) < 0) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"invalid parameters for --"
- OPT_LOG_LEVEL "\n");
+ OPT_LOG_LEVEL);
return -1;
}
break;
@@ -1838,8 +1838,8 @@ eal_parse_common_option(int opt, const char *optarg,
#ifndef RTE_EXEC_ENV_WINDOWS
case OPT_TRACE_NUM: {
if (eal_trace_args_save(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_TRACE "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_TRACE);
return -1;
}
break;
@@ -1847,8 +1847,8 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_TRACE_DIR_NUM: {
if (eal_trace_dir_args_save(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_TRACE_DIR "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_TRACE_DIR);
return -1;
}
break;
@@ -1856,8 +1856,8 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_TRACE_BUF_SIZE_NUM: {
if (eal_trace_bufsz_args_save(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_TRACE_BUF_SIZE "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_TRACE_BUF_SIZE);
return -1;
}
break;
@@ -1865,8 +1865,8 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_TRACE_MODE_NUM: {
if (eal_trace_mode_args_save(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_TRACE_MODE "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_TRACE_MODE);
return -1;
}
break;
@@ -1875,13 +1875,13 @@ eal_parse_common_option(int opt, const char *optarg,
case OPT_LCORES_NUM:
if (eal_parse_lcores(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_LCORES "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameter for --"
+ OPT_LCORES);
return -1;
}
if (core_parsed) {
- RTE_LOG(ERR, EAL, "Option --lcore is ignored, because (%s) is set!\n",
+ RTE_LOG_LINE(ERR, EAL, "Option --lcore is ignored, because (%s) is set!",
(core_parsed == LCORE_OPT_LST) ? "-l" :
(core_parsed == LCORE_OPT_MSK) ? "-c" :
"--lcore");
@@ -1898,15 +1898,15 @@ eal_parse_common_option(int opt, const char *optarg,
break;
case OPT_IOVA_MODE_NUM:
if (eal_parse_iova_mode(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_IOVA_MODE "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_IOVA_MODE);
return -1;
}
break;
case OPT_BASE_VIRTADDR_NUM:
if (eal_parse_base_virtaddr(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_BASE_VIRTADDR "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameter for --"
+ OPT_BASE_VIRTADDR);
return -1;
}
break;
@@ -1917,8 +1917,8 @@ eal_parse_common_option(int opt, const char *optarg,
break;
case OPT_FORCE_MAX_SIMD_BITWIDTH_NUM:
if (eal_parse_simd_bitwidth(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_FORCE_MAX_SIMD_BITWIDTH "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameter for --"
+ OPT_FORCE_MAX_SIMD_BITWIDTH);
return -1;
}
break;
@@ -1932,8 +1932,8 @@ eal_parse_common_option(int opt, const char *optarg,
return 0;
ba_conflict:
- RTE_LOG(ERR, EAL,
- "Options allow (-a) and block (-b) can't be used at the same time\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Options allow (-a) and block (-b) can't be used at the same time");
return -1;
}
@@ -2034,94 +2034,94 @@ eal_check_common_options(struct internal_config *internal_cfg)
eal_get_internal_configuration();
if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
- RTE_LOG(ERR, EAL, "Main lcore is not enabled for DPDK\n");
+ RTE_LOG_LINE(ERR, EAL, "Main lcore is not enabled for DPDK");
return -1;
}
if (internal_cfg->process_type == RTE_PROC_INVALID) {
- RTE_LOG(ERR, EAL, "Invalid process type specified\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid process type specified");
return -1;
}
if (internal_cfg->hugefile_prefix != NULL &&
strlen(internal_cfg->hugefile_prefix) < 1) {
- RTE_LOG(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_FILE_PREFIX " option");
return -1;
}
if (internal_cfg->hugepage_dir != NULL &&
strlen(internal_cfg->hugepage_dir) < 1) {
- RTE_LOG(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_HUGE_DIR" option");
return -1;
}
if (internal_cfg->user_mbuf_pool_ops_name != NULL &&
strlen(internal_cfg->user_mbuf_pool_ops_name) < 1) {
- RTE_LOG(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid length of --" OPT_MBUF_POOL_OPS_NAME" option");
return -1;
}
if (strchr(eal_get_hugefile_prefix(), '%') != NULL) {
- RTE_LOG(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" "
- "option\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid char, '%%', in --"OPT_FILE_PREFIX" "
+ "option");
return -1;
}
if (mem_parsed && internal_cfg->force_sockets == 1) {
- RTE_LOG(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot "
- "be specified at the same time\n");
+ RTE_LOG_LINE(ERR, EAL, "Options -m and --"OPT_SOCKET_MEM" cannot "
+ "be specified at the same time");
return -1;
}
if (internal_cfg->no_hugetlbfs && internal_cfg->force_sockets == 1) {
- RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot "
- "be specified together with --"OPT_NO_HUGE"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_MEM" cannot "
+ "be specified together with --"OPT_NO_HUGE);
return -1;
}
if (internal_cfg->no_hugetlbfs &&
internal_cfg->hugepage_file.unlink_before_mapping &&
!internal_cfg->in_memory) {
- RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot "
- "be specified together with --"OPT_NO_HUGE"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_HUGE_UNLINK" cannot "
+ "be specified together with --"OPT_NO_HUGE);
return -1;
}
if (internal_cfg->no_hugetlbfs &&
internal_cfg->huge_worker_stack_size != 0) {
- RTE_LOG(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot "
- "be specified together with --"OPT_NO_HUGE"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_HUGE_WORKER_STACK" cannot "
+ "be specified together with --"OPT_NO_HUGE);
return -1;
}
if (internal_conf->force_socket_limits && internal_conf->legacy_mem) {
- RTE_LOG(ERR, EAL, "Option --"OPT_SOCKET_LIMIT
- " is only supported in non-legacy memory mode\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SOCKET_LIMIT
+ " is only supported in non-legacy memory mode");
}
if (internal_cfg->single_file_segments &&
internal_cfg->hugepage_file.unlink_before_mapping &&
!internal_cfg->in_memory) {
- RTE_LOG(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is "
- "not compatible with --"OPT_HUGE_UNLINK"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_SINGLE_FILE_SEGMENTS" is "
+ "not compatible with --"OPT_HUGE_UNLINK);
return -1;
}
if (!internal_cfg->hugepage_file.unlink_existing &&
internal_cfg->in_memory) {
- RTE_LOG(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible "
- "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_IN_MEMORY" is not compatible "
+ "with --"OPT_HUGE_UNLINK"="HUGE_UNLINK_NEVER);
return -1;
}
if (internal_cfg->legacy_mem &&
internal_cfg->in_memory) {
- RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible "
- "with --"OPT_IN_MEMORY"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible "
+ "with --"OPT_IN_MEMORY);
return -1;
}
if (internal_cfg->legacy_mem && internal_cfg->match_allocations) {
- RTE_LOG(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible "
- "with --"OPT_MATCH_ALLOCATIONS"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_LEGACY_MEM" is not compatible "
+ "with --"OPT_MATCH_ALLOCATIONS);
return -1;
}
if (internal_cfg->no_hugetlbfs && internal_cfg->match_allocations) {
- RTE_LOG(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible "
- "with --"OPT_MATCH_ALLOCATIONS"\n");
+ RTE_LOG_LINE(ERR, EAL, "Option --"OPT_NO_HUGE" is not compatible "
+ "with --"OPT_MATCH_ALLOCATIONS);
return -1;
}
if (internal_cfg->legacy_mem && internal_cfg->memory == 0) {
- RTE_LOG(NOTICE, EAL, "Static memory layout is selected, "
+ RTE_LOG_LINE(NOTICE, EAL, "Static memory layout is selected, "
"amount of reserved memory can be adjusted with "
- "-m or --"OPT_SOCKET_MEM"\n");
+ "-m or --"OPT_SOCKET_MEM);
}
return 0;
@@ -2141,12 +2141,12 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)
struct internal_config *internal_conf =
eal_get_internal_configuration();
if (internal_conf->max_simd_bitwidth.forced) {
- RTE_LOG(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled\n");
+ RTE_LOG_LINE(NOTICE, EAL, "Cannot set max SIMD bitwidth - user runtime override enabled");
return -EPERM;
}
if (bitwidth < RTE_VECT_SIMD_DISABLED || !rte_is_power_of_2(bitwidth)) {
- RTE_LOG(ERR, EAL, "Invalid bitwidth value!\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid bitwidth value!");
return -EINVAL;
}
internal_conf->max_simd_bitwidth.bitwidth = bitwidth;
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 728815c4a9..abc6117c65 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -181,12 +181,12 @@ static int
validate_action_name(const char *name)
{
if (name == NULL) {
- RTE_LOG(ERR, EAL, "Action name cannot be NULL\n");
+ RTE_LOG_LINE(ERR, EAL, "Action name cannot be NULL");
rte_errno = EINVAL;
return -1;
}
if (strnlen(name, RTE_MP_MAX_NAME_LEN) == 0) {
- RTE_LOG(ERR, EAL, "Length of action name is zero\n");
+ RTE_LOG_LINE(ERR, EAL, "Length of action name is zero");
rte_errno = EINVAL;
return -1;
}
@@ -208,7 +208,7 @@ rte_mp_action_register(const char *name, rte_mp_t action)
return -1;
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
}
@@ -244,7 +244,7 @@ rte_mp_action_unregister(const char *name)
return;
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
return;
}
@@ -291,12 +291,12 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s)
if (errno == EINTR)
goto retry;
- RTE_LOG(ERR, EAL, "recvmsg failed, %s\n", strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "recvmsg failed, %s", strerror(errno));
return -1;
}
if (msglen != buflen || (msgh.msg_flags & (MSG_TRUNC | MSG_CTRUNC))) {
- RTE_LOG(ERR, EAL, "truncated msg\n");
+ RTE_LOG_LINE(ERR, EAL, "truncated msg");
return -1;
}
@@ -311,11 +311,11 @@ read_msg(int fd, struct mp_msg_internal *m, struct sockaddr_un *s)
}
/* sanity-check the response */
if (m->msg.num_fds < 0 || m->msg.num_fds > RTE_MP_MAX_FD_NUM) {
- RTE_LOG(ERR, EAL, "invalid number of fd's received\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid number of fd's received");
return -1;
}
if (m->msg.len_param < 0 || m->msg.len_param > RTE_MP_MAX_PARAM_LEN) {
- RTE_LOG(ERR, EAL, "invalid received data length\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid received data length");
return -1;
}
return msglen;
@@ -340,7 +340,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s)
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- RTE_LOG(DEBUG, EAL, "msg: %s\n", msg->name);
+ RTE_LOG_LINE(DEBUG, EAL, "msg: %s", msg->name);
if (m->type == MP_REP || m->type == MP_IGN) {
struct pending_request *req = NULL;
@@ -359,7 +359,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s)
req = async_reply_handle_thread_unsafe(
pending_req);
} else {
- RTE_LOG(ERR, EAL, "Drop mp reply: %s\n", msg->name);
+ RTE_LOG_LINE(ERR, EAL, "Drop mp reply: %s", msg->name);
cleanup_msg_fds(msg);
}
pthread_mutex_unlock(&pending_requests.lock);
@@ -388,12 +388,12 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s)
strlcpy(dummy.name, msg->name, sizeof(dummy.name));
mp_send(&dummy, s->sun_path, MP_IGN);
} else {
- RTE_LOG(ERR, EAL, "Cannot find action: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot find action: %s",
msg->name);
}
cleanup_msg_fds(msg);
} else if (action(msg, s->sun_path) < 0) {
- RTE_LOG(ERR, EAL, "Fail to handle message: %s\n", msg->name);
+ RTE_LOG_LINE(ERR, EAL, "Fail to handle message: %s", msg->name);
}
}
@@ -459,7 +459,7 @@ process_async_request(struct pending_request *sr, const struct timespec *now)
tmp = realloc(user_msgs, sizeof(*msg) *
(reply->nb_received + 1));
if (!tmp) {
- RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s",
sr->dst, sr->request->name);
/* this entry is going to be removed and its message
* dropped, but we don't want to leak memory, so
@@ -518,7 +518,7 @@ async_reply_handle_thread_unsafe(void *arg)
struct timespec ts_now;
if (clock_gettime(CLOCK_MONOTONIC, &ts_now) < 0) {
- RTE_LOG(ERR, EAL, "Cannot get current time\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot get current time");
goto no_trigger;
}
@@ -532,10 +532,10 @@ async_reply_handle_thread_unsafe(void *arg)
* handling the same message twice.
*/
if (rte_errno == EINPROGRESS) {
- RTE_LOG(DEBUG, EAL, "Request handling is already in progress\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Request handling is already in progress");
goto no_trigger;
}
- RTE_LOG(ERR, EAL, "Failed to cancel alarm\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to cancel alarm");
}
if (action == ACTION_TRIGGER)
@@ -570,7 +570,7 @@ open_socket_fd(void)
mp_fd = socket(AF_UNIX, SOCK_DGRAM, 0);
if (mp_fd < 0) {
- RTE_LOG(ERR, EAL, "failed to create unix socket\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to create unix socket");
return -1;
}
@@ -582,13 +582,13 @@ open_socket_fd(void)
unlink(un.sun_path); /* May still exist since last run */
if (bind(mp_fd, (struct sockaddr *)&un, sizeof(un)) < 0) {
- RTE_LOG(ERR, EAL, "failed to bind %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to bind %s: %s",
un.sun_path, strerror(errno));
close(mp_fd);
return -1;
}
- RTE_LOG(INFO, EAL, "Multi-process socket %s\n", un.sun_path);
+ RTE_LOG_LINE(INFO, EAL, "Multi-process socket %s", un.sun_path);
return mp_fd;
}
@@ -614,7 +614,7 @@ rte_mp_channel_init(void)
* so no need to initialize IPC.
*/
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC will be disabled");
rte_errno = ENOTSUP;
return -1;
}
@@ -630,13 +630,13 @@ rte_mp_channel_init(void)
/* lock the directory */
dir_fd = open(mp_dir_path, O_RDONLY);
if (dir_fd < 0) {
- RTE_LOG(ERR, EAL, "failed to open %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to open %s: %s",
mp_dir_path, strerror(errno));
return -1;
}
if (flock(dir_fd, LOCK_EX)) {
- RTE_LOG(ERR, EAL, "failed to lock %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to lock %s: %s",
mp_dir_path, strerror(errno));
close(dir_fd);
return -1;
@@ -649,7 +649,7 @@ rte_mp_channel_init(void)
if (rte_thread_create_internal_control(&mp_handle_tid, "mp-msg",
mp_handle, NULL) < 0) {
- RTE_LOG(ERR, EAL, "failed to create mp thread: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to create mp thread: %s",
strerror(errno));
close(dir_fd);
close(rte_atomic_exchange_explicit(&mp_fd, -1, rte_memory_order_relaxed));
@@ -732,7 +732,7 @@ send_msg(const char *dst_path, struct rte_mp_msg *msg, int type)
unlink(dst_path);
return 0;
}
- RTE_LOG(ERR, EAL, "failed to send to (%s) due to %s\n",
+ RTE_LOG_LINE(ERR, EAL, "failed to send to (%s) due to %s",
dst_path, strerror(errno));
return -1;
}
@@ -760,7 +760,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type)
/* broadcast to all secondary processes */
mp_dir = opendir(mp_dir_path);
if (!mp_dir) {
- RTE_LOG(ERR, EAL, "Unable to open directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s",
mp_dir_path);
rte_errno = errno;
return -1;
@@ -769,7 +769,7 @@ mp_send(struct rte_mp_msg *msg, const char *peer, int type)
dir_fd = dirfd(mp_dir);
/* lock the directory to prevent processes spinning up while we send */
if (flock(dir_fd, LOCK_SH)) {
- RTE_LOG(ERR, EAL, "Unable to lock directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s",
mp_dir_path);
rte_errno = errno;
closedir(mp_dir);
@@ -799,7 +799,7 @@ static int
check_input(const struct rte_mp_msg *msg)
{
if (msg == NULL) {
- RTE_LOG(ERR, EAL, "Msg cannot be NULL\n");
+ RTE_LOG_LINE(ERR, EAL, "Msg cannot be NULL");
rte_errno = EINVAL;
return -1;
}
@@ -808,25 +808,25 @@ check_input(const struct rte_mp_msg *msg)
return -1;
if (msg->len_param < 0) {
- RTE_LOG(ERR, EAL, "Message data length is negative\n");
+ RTE_LOG_LINE(ERR, EAL, "Message data length is negative");
rte_errno = EINVAL;
return -1;
}
if (msg->num_fds < 0) {
- RTE_LOG(ERR, EAL, "Number of fd's is negative\n");
+ RTE_LOG_LINE(ERR, EAL, "Number of fd's is negative");
rte_errno = EINVAL;
return -1;
}
if (msg->len_param > RTE_MP_MAX_PARAM_LEN) {
- RTE_LOG(ERR, EAL, "Message data is too long\n");
+ RTE_LOG_LINE(ERR, EAL, "Message data is too long");
rte_errno = E2BIG;
return -1;
}
if (msg->num_fds > RTE_MP_MAX_FD_NUM) {
- RTE_LOG(ERR, EAL, "Cannot send more than %d FDs\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot send more than %d FDs",
RTE_MP_MAX_FD_NUM);
rte_errno = E2BIG;
return -1;
@@ -845,12 +845,12 @@ rte_mp_sendmsg(struct rte_mp_msg *msg)
return -1;
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
}
- RTE_LOG(DEBUG, EAL, "sendmsg: %s\n", msg->name);
+ RTE_LOG_LINE(DEBUG, EAL, "sendmsg: %s", msg->name);
return mp_send(msg, NULL, MP_MSG);
}
@@ -865,7 +865,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req,
pending_req = calloc(1, sizeof(*pending_req));
reply_msg = calloc(1, sizeof(*reply_msg));
if (pending_req == NULL || reply_msg == NULL) {
- RTE_LOG(ERR, EAL, "Could not allocate space for sync request\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not allocate space for sync request");
rte_errno = ENOMEM;
ret = -1;
goto fail;
@@ -881,7 +881,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req,
exist = find_pending_request(dst, req->name);
if (exist) {
- RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name);
+ RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name);
rte_errno = EEXIST;
ret = -1;
goto fail;
@@ -889,7 +889,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req,
ret = send_msg(dst, req, MP_REQ);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s",
dst, req->name);
ret = -1;
goto fail;
@@ -902,7 +902,7 @@ mp_request_async(const char *dst, struct rte_mp_msg *req,
/* if alarm set fails, we simply ignore the reply */
if (rte_eal_alarm_set(ts->tv_sec * 1000000 + ts->tv_nsec / 1000,
async_reply_handle, pending_req) < 0) {
- RTE_LOG(ERR, EAL, "Fail to set alarm for request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to set alarm for request %s:%s",
dst, req->name);
ret = -1;
goto fail;
@@ -936,14 +936,14 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req,
exist = find_pending_request(dst, req->name);
if (exist) {
- RTE_LOG(ERR, EAL, "A pending request %s:%s\n", dst, req->name);
+ RTE_LOG_LINE(ERR, EAL, "A pending request %s:%s", dst, req->name);
rte_errno = EEXIST;
return -1;
}
ret = send_msg(dst, req, MP_REQ);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Fail to send request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to send request %s:%s",
dst, req->name);
return -1;
} else if (ret == 0)
@@ -961,13 +961,13 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req,
TAILQ_REMOVE(&pending_requests.requests, &pending_req, next);
if (pending_req.reply_received == 0) {
- RTE_LOG(ERR, EAL, "Fail to recv reply for request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to recv reply for request %s:%s",
dst, req->name);
rte_errno = ETIMEDOUT;
return -1;
}
if (pending_req.reply_received == -1) {
- RTE_LOG(DEBUG, EAL, "Asked to ignore response\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Asked to ignore response");
/* not receiving this message is not an error, so decrement
* number of sent messages
*/
@@ -977,7 +977,7 @@ mp_request_sync(const char *dst, struct rte_mp_msg *req,
tmp = realloc(reply->msgs, sizeof(msg) * (reply->nb_received + 1));
if (!tmp) {
- RTE_LOG(ERR, EAL, "Fail to alloc reply for request %s:%s\n",
+ RTE_LOG_LINE(ERR, EAL, "Fail to alloc reply for request %s:%s",
dst, req->name);
rte_errno = ENOMEM;
return -1;
@@ -999,7 +999,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- RTE_LOG(DEBUG, EAL, "request: %s\n", req->name);
+ RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name);
reply->nb_sent = 0;
reply->nb_received = 0;
@@ -1009,13 +1009,13 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
goto end;
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
}
if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) {
- RTE_LOG(ERR, EAL, "Failed to get current time\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to get current time");
rte_errno = errno;
goto end;
}
@@ -1035,7 +1035,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
/* for primary process, broadcast request, and collect reply 1 by 1 */
mp_dir = opendir(mp_dir_path);
if (!mp_dir) {
- RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path);
+ RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path);
rte_errno = errno;
goto end;
}
@@ -1043,7 +1043,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
dir_fd = dirfd(mp_dir);
/* lock the directory to prevent processes spinning up while we send */
if (flock(dir_fd, LOCK_SH)) {
- RTE_LOG(ERR, EAL, "Unable to lock directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s",
mp_dir_path);
rte_errno = errno;
goto close_end;
@@ -1102,19 +1102,19 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- RTE_LOG(DEBUG, EAL, "request: %s\n", req->name);
+ RTE_LOG_LINE(DEBUG, EAL, "request: %s", req->name);
if (check_input(req) != 0)
return -1;
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
}
if (clock_gettime(CLOCK_MONOTONIC, &now) < 0) {
- RTE_LOG(ERR, EAL, "Failed to get current time\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to get current time");
rte_errno = errno;
return -1;
}
@@ -1122,7 +1122,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
dummy = calloc(1, sizeof(*dummy));
param = calloc(1, sizeof(*param));
if (copy == NULL || dummy == NULL || param == NULL) {
- RTE_LOG(ERR, EAL, "Failed to allocate memory for async reply\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to allocate memory for async reply");
rte_errno = ENOMEM;
goto fail;
}
@@ -1180,7 +1180,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
/* for primary process, broadcast request */
mp_dir = opendir(mp_dir_path);
if (!mp_dir) {
- RTE_LOG(ERR, EAL, "Unable to open directory %s\n", mp_dir_path);
+ RTE_LOG_LINE(ERR, EAL, "Unable to open directory %s", mp_dir_path);
rte_errno = errno;
goto unlock_fail;
}
@@ -1188,7 +1188,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
/* lock the directory to prevent processes spinning up while we send */
if (flock(dir_fd, LOCK_SH)) {
- RTE_LOG(ERR, EAL, "Unable to lock directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to lock directory %s",
mp_dir_path);
rte_errno = errno;
goto closedir_fail;
@@ -1240,7 +1240,7 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
int
rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
{
- RTE_LOG(DEBUG, EAL, "reply: %s\n", msg->name);
+ RTE_LOG_LINE(DEBUG, EAL, "reply: %s", msg->name);
const struct internal_config *internal_conf =
eal_get_internal_configuration();
@@ -1248,13 +1248,13 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
return -1;
if (peer == NULL) {
- RTE_LOG(ERR, EAL, "peer is not specified\n");
+ RTE_LOG_LINE(ERR, EAL, "peer is not specified");
rte_errno = EINVAL;
return -1;
}
if (internal_conf->no_shconf) {
- RTE_LOG(DEBUG, EAL, "No shared files mode enabled, IPC is disabled\n");
+ RTE_LOG_LINE(DEBUG, EAL, "No shared files mode enabled, IPC is disabled");
return 0;
}
diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c
index 580fbf24bc..06a6cac4ff 100644
--- a/lib/eal/common/eal_common_tailqs.c
+++ b/lib/eal/common/eal_common_tailqs.c
@@ -109,8 +109,8 @@ int
rte_eal_tailq_register(struct rte_tailq_elem *t)
{
if (rte_eal_tailq_local_register(t) < 0) {
- RTE_LOG(ERR, EAL,
- "%s tailq is already registered\n", t->name);
+ RTE_LOG_LINE(ERR, EAL,
+ "%s tailq is already registered", t->name);
goto error;
}
@@ -119,8 +119,8 @@ rte_eal_tailq_register(struct rte_tailq_elem *t)
if (rte_tailqs_count >= 0) {
rte_eal_tailq_update(t);
if (t->head == NULL) {
- RTE_LOG(ERR, EAL,
- "Cannot initialize tailq: %s\n", t->name);
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot initialize tailq: %s", t->name);
TAILQ_REMOVE(&rte_tailq_elem_head, t, next);
goto error;
}
@@ -145,8 +145,8 @@ rte_eal_tailqs_init(void)
* rte_eal_tailq_register and EAL_REGISTER_TAILQ */
rte_eal_tailq_update(t);
if (t->head == NULL) {
- RTE_LOG(ERR, EAL,
- "Cannot initialize tailq: %s\n", t->name);
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot initialize tailq: %s", t->name);
/* TAILQ_REMOVE not needed, error is already fatal */
goto fail;
}
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index c422ea8b53..b0974a7aa5 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -86,7 +86,7 @@ int
rte_thread_set_affinity(rte_cpuset_t *cpusetp)
{
if (rte_thread_set_affinity_by_id(rte_thread_self(), cpusetp) != 0) {
- RTE_LOG(ERR, EAL, "rte_thread_set_affinity_by_id failed\n");
+ RTE_LOG_LINE(ERR, EAL, "rte_thread_set_affinity_by_id failed");
return -1;
}
@@ -175,7 +175,7 @@ eal_thread_loop(void *arg)
__rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ RTE_LOG_LINE(DEBUG, EAL, "lcore %u is ready (tid=%zx;cpuset=[%s%s])",
lcore_id, rte_thread_self().opaque_id, cpuset,
ret == 0 ? "" : "...");
@@ -368,12 +368,12 @@ rte_thread_register(void)
/* EAL init flushes all lcores, we can't register before. */
if (eal_get_internal_configuration()->init_complete != 1) {
- RTE_LOG(DEBUG, EAL, "Called %s before EAL init.\n", __func__);
+ RTE_LOG_LINE(DEBUG, EAL, "Called %s before EAL init.", __func__);
rte_errno = EINVAL;
return -1;
}
if (!rte_mp_disable()) {
- RTE_LOG(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.\n");
+ RTE_LOG_LINE(ERR, EAL, "Multiprocess in use, registering non-EAL threads is not supported.");
rte_errno = EINVAL;
return -1;
}
@@ -387,7 +387,7 @@ rte_thread_register(void)
rte_errno = ENOMEM;
return -1;
}
- RTE_LOG(DEBUG, EAL, "Registered non-EAL thread as lcore %u.\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Registered non-EAL thread as lcore %u.",
lcore_id);
return 0;
}
@@ -401,7 +401,7 @@ rte_thread_unregister(void)
eal_lcore_non_eal_release(lcore_id);
__rte_thread_uninit();
if (lcore_id != LCORE_ID_ANY)
- RTE_LOG(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Unregistered non-EAL thread (was lcore %u).",
lcore_id);
}
diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c
index 5686a5102b..bd2ca85c6c 100644
--- a/lib/eal/common/eal_common_timer.c
+++ b/lib/eal/common/eal_common_timer.c
@@ -39,8 +39,8 @@ static uint64_t
estimate_tsc_freq(void)
{
#define CYC_PER_10MHZ 1E7
- RTE_LOG(WARNING, EAL, "WARNING: TSC frequency estimated roughly"
- " - clock timings may be less accurate.\n");
+ RTE_LOG_LINE(WARNING, EAL, "WARNING: TSC frequency estimated roughly"
+ " - clock timings may be less accurate.");
/* assume that the rte_delay_us_sleep() will sleep for 1 second */
uint64_t start = rte_rdtsc();
rte_delay_us_sleep(US_PER_S);
@@ -71,7 +71,7 @@ set_tsc_freq(void)
if (!freq)
freq = estimate_tsc_freq();
- RTE_LOG(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz\n", freq / 1000);
+ RTE_LOG_LINE(DEBUG, EAL, "TSC frequency is ~%" PRIu64 " KHz", freq / 1000);
eal_tsc_resolution_hz = freq;
mcfg->tsc_hz = freq;
}
diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c
index 8561a0e198..f5e724f9cd 100644
--- a/lib/eal/common/eal_common_trace_utils.c
+++ b/lib/eal/common/eal_common_trace_utils.c
@@ -348,7 +348,7 @@ trace_mkdir(void)
return -rte_errno;
}
- RTE_LOG(INFO, EAL, "Trace dir: %s\n", trace->dir);
+ RTE_LOG_LINE(INFO, EAL, "Trace dir: %s", trace->dir);
already_done = true;
return 0;
}
diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h
index ace2ef3ee5..4dbd6ea457 100644
--- a/lib/eal/common/eal_trace.h
+++ b/lib/eal/common/eal_trace.h
@@ -17,10 +17,10 @@
#include "eal_thread.h"
#define trace_err(fmt, args...) \
- RTE_LOG(ERR, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args)
+ RTE_LOG_LINE(ERR, EAL, "%s():%u " fmt, __func__, __LINE__, ## args)
#define trace_crit(fmt, args...) \
- RTE_LOG(CRIT, EAL, "%s():%u " fmt "\n", __func__, __LINE__, ## args)
+ RTE_LOG_LINE(CRIT, EAL, "%s():%u " fmt, __func__, __LINE__, ## args)
#define TRACE_CTF_MAGIC 0xC1FC1FC1
#define TRACE_MAX_ARGS 32
diff --git a/lib/eal/common/hotplug_mp.c b/lib/eal/common/hotplug_mp.c
index 602781966c..cd47c248f5 100644
--- a/lib/eal/common/hotplug_mp.c
+++ b/lib/eal/common/hotplug_mp.c
@@ -77,7 +77,7 @@ send_response_to_secondary(const struct eal_dev_mp_req *req,
ret = rte_mp_reply(&mp_resp, peer);
if (ret != 0)
- RTE_LOG(ERR, EAL, "failed to send response to secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary");
return ret;
}
@@ -101,18 +101,18 @@ __handle_secondary_request(void *param)
if (req->t == EAL_DEV_REQ_TYPE_ATTACH) {
ret = local_dev_probe(req->devargs, &dev);
if (ret != 0 && ret != -EEXIST) {
- RTE_LOG(ERR, EAL, "Failed to hotplug add device on primary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on primary");
goto finish;
}
ret = eal_dev_hotplug_request_to_secondary(&tmp_req);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary");
ret = -ENOMSG;
goto rollback;
}
if (tmp_req.result != 0) {
ret = tmp_req.result;
- RTE_LOG(ERR, EAL, "Failed to hotplug add device on secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to hotplug add device on secondary");
if (ret != -EEXIST)
goto rollback;
}
@@ -123,27 +123,27 @@ __handle_secondary_request(void *param)
ret = eal_dev_hotplug_request_to_secondary(&tmp_req);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Failed to send hotplug request to secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to send hotplug request to secondary");
ret = -ENOMSG;
goto rollback;
}
bus = rte_bus_find_by_name(da.bus->name);
if (bus == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da.bus->name);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da.bus->name);
ret = -ENOENT;
goto finish;
}
dev = bus->find_device(NULL, cmp_dev_name, da.name);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da.name);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da.name);
ret = -ENOENT;
goto finish;
}
if (tmp_req.result != 0) {
- RTE_LOG(ERR, EAL, "Failed to hotplug remove device on secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on secondary");
ret = tmp_req.result;
if (ret != -ENOENT)
goto rollback;
@@ -151,12 +151,12 @@ __handle_secondary_request(void *param)
ret = local_dev_remove(dev);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Failed to hotplug remove device on primary\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to hotplug remove device on primary");
if (ret != -ENOENT)
goto rollback;
}
} else {
- RTE_LOG(ERR, EAL, "unsupported secondary to primary request\n");
+ RTE_LOG_LINE(ERR, EAL, "unsupported secondary to primary request");
ret = -ENOTSUP;
}
goto finish;
@@ -174,7 +174,7 @@ __handle_secondary_request(void *param)
finish:
ret = send_response_to_secondary(&tmp_req, ret, bundle->peer);
if (ret)
- RTE_LOG(ERR, EAL, "failed to send response to secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send response to secondary");
rte_devargs_reset(&da);
free(bundle->peer);
@@ -191,7 +191,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer)
bundle = malloc(sizeof(*bundle));
if (bundle == NULL) {
- RTE_LOG(ERR, EAL, "not enough memory\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory");
return send_response_to_secondary(req, -ENOMEM, peer);
}
@@ -204,7 +204,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer)
bundle->peer = strdup(peer);
if (bundle->peer == NULL) {
free(bundle);
- RTE_LOG(ERR, EAL, "not enough memory\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory");
return send_response_to_secondary(req, -ENOMEM, peer);
}
@@ -214,7 +214,7 @@ handle_secondary_request(const struct rte_mp_msg *msg, const void *peer)
*/
ret = rte_eal_alarm_set(1, __handle_secondary_request, bundle);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "failed to add mp task\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to add mp task");
free(bundle->peer);
free(bundle);
return send_response_to_secondary(req, ret, peer);
@@ -257,14 +257,14 @@ static void __handle_primary_request(void *param)
bus = rte_bus_find_by_name(da->bus->name);
if (bus == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n", da->bus->name);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)", da->bus->name);
ret = -ENOENT;
goto quit;
}
dev = bus->find_device(NULL, cmp_dev_name, da->name);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find plugged device (%s)\n", da->name);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find plugged device (%s)", da->name);
ret = -ENOENT;
goto quit;
}
@@ -296,7 +296,7 @@ static void __handle_primary_request(void *param)
memcpy(resp, req, sizeof(*resp));
resp->result = ret;
if (rte_mp_reply(&mp_resp, bundle->peer) < 0)
- RTE_LOG(ERR, EAL, "failed to send reply to primary request\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request");
free(bundle->peer);
free(bundle);
@@ -320,11 +320,11 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer)
bundle = calloc(1, sizeof(*bundle));
if (bundle == NULL) {
- RTE_LOG(ERR, EAL, "not enough memory\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory");
resp->result = -ENOMEM;
ret = rte_mp_reply(&mp_resp, peer);
if (ret)
- RTE_LOG(ERR, EAL, "failed to send reply to primary request\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request");
return ret;
}
@@ -336,12 +336,12 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer)
*/
bundle->peer = (void *)strdup(peer);
if (bundle->peer == NULL) {
- RTE_LOG(ERR, EAL, "not enough memory\n");
+ RTE_LOG_LINE(ERR, EAL, "not enough memory");
free(bundle);
resp->result = -ENOMEM;
ret = rte_mp_reply(&mp_resp, peer);
if (ret)
- RTE_LOG(ERR, EAL, "failed to send reply to primary request\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request");
return ret;
}
@@ -356,7 +356,7 @@ handle_primary_request(const struct rte_mp_msg *msg, const void *peer)
resp->result = ret;
ret = rte_mp_reply(&mp_resp, peer);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "failed to send reply to primary request\n");
+ RTE_LOG_LINE(ERR, EAL, "failed to send reply to primary request");
return ret;
}
}
@@ -378,7 +378,7 @@ int eal_dev_hotplug_request_to_primary(struct eal_dev_mp_req *req)
ret = rte_mp_request_sync(&mp_req, &mp_reply, &ts);
if (ret || mp_reply.nb_received != 1) {
- RTE_LOG(ERR, EAL, "Cannot send request to primary\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot send request to primary");
if (!ret)
return -1;
return ret;
@@ -408,14 +408,14 @@ int eal_dev_hotplug_request_to_secondary(struct eal_dev_mp_req *req)
if (ret != 0) {
/* if IPC is not supported, behave as if the call succeeded */
if (rte_errno != ENOTSUP)
- RTE_LOG(ERR, EAL, "rte_mp_request_sync failed\n");
+ RTE_LOG_LINE(ERR, EAL, "rte_mp_request_sync failed");
else
ret = 0;
return ret;
}
if (mp_reply.nb_sent != mp_reply.nb_received) {
- RTE_LOG(ERR, EAL, "not all secondary reply\n");
+ RTE_LOG_LINE(ERR, EAL, "not all secondary reply");
free(mp_reply.msgs);
return -1;
}
@@ -448,7 +448,7 @@ int eal_mp_dev_hotplug_init(void)
handle_secondary_request);
/* primary is allowed to not support IPC */
if (ret != 0 && rte_errno != ENOTSUP) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
EAL_DEV_MP_ACTION_REQUEST);
return ret;
}
@@ -456,7 +456,7 @@ int eal_mp_dev_hotplug_init(void)
ret = rte_mp_action_register(EAL_DEV_MP_ACTION_REQUEST,
handle_primary_request);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
EAL_DEV_MP_ACTION_REQUEST);
return ret;
}
diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c
index f5d1c8c2e2..6e9d5b8660 100644
--- a/lib/eal/common/malloc_elem.c
+++ b/lib/eal/common/malloc_elem.c
@@ -148,7 +148,7 @@ malloc_elem_insert(struct malloc_elem *elem)
/* first and last elements must be both NULL or both non-NULL */
if ((heap->first == NULL) != (heap->last == NULL)) {
- RTE_LOG(ERR, EAL, "Heap is probably corrupt\n");
+ RTE_LOG_LINE(ERR, EAL, "Heap is probably corrupt");
return;
}
@@ -628,7 +628,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len)
malloc_elem_free_list_insert(hide_end);
} else if (len_after > 0) {
- RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n");
+ RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt");
return;
}
}
@@ -647,7 +647,7 @@ malloc_elem_hide_region(struct malloc_elem *elem, void *start, size_t len)
malloc_elem_free_list_insert(prev);
} else if (len_before > 0) {
- RTE_LOG(ERR, EAL, "Unaligned element, heap is probably corrupt\n");
+ RTE_LOG_LINE(ERR, EAL, "Unaligned element, heap is probably corrupt");
return;
}
}
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6b6cf9174c..010c84c36c 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -117,7 +117,7 @@ malloc_add_seg(const struct rte_memseg_list *msl,
heap_idx = malloc_socket_to_heap_id(msl->socket_id);
if (heap_idx < 0) {
- RTE_LOG(ERR, EAL, "Memseg list has invalid socket id\n");
+ RTE_LOG_LINE(ERR, EAL, "Memseg list has invalid socket id");
return -1;
}
heap = &mcfg->malloc_heaps[heap_idx];
@@ -135,7 +135,7 @@ malloc_add_seg(const struct rte_memseg_list *msl,
heap->total_size += len;
- RTE_LOG(DEBUG, EAL, "Added %zuM to heap on socket %i\n", len >> 20,
+ RTE_LOG_LINE(DEBUG, EAL, "Added %zuM to heap on socket %i", len >> 20,
msl->socket_id);
return 0;
}
@@ -308,7 +308,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
/* first, check if we're allowed to allocate this memory */
if (eal_memalloc_mem_alloc_validate(socket,
heap->total_size + alloc_sz) < 0) {
- RTE_LOG(DEBUG, EAL, "User has disallowed allocation\n");
+ RTE_LOG_LINE(DEBUG, EAL, "User has disallowed allocation");
return NULL;
}
@@ -324,7 +324,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
/* check if we wanted contiguous memory but didn't get it */
if (contig && !eal_memalloc_is_contig(msl, map_addr, alloc_sz)) {
- RTE_LOG(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): couldn't allocate physically contiguous space",
__func__);
goto fail;
}
@@ -352,8 +352,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
* which could solve some situations when IOVA VA is not
* really needed.
*/
- RTE_LOG(ERR, EAL,
- "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask",
__func__);
/*
@@ -363,8 +363,8 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
*/
if ((rte_eal_iova_mode() == RTE_IOVA_VA) &&
rte_eal_using_phys_addrs())
- RTE_LOG(ERR, EAL,
- "%s(): Please try initializing EAL with --iova-mode=pa parameter\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s(): Please try initializing EAL with --iova-mode=pa parameter",
__func__);
goto fail;
}
@@ -440,7 +440,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz,
}
heap->total_size += alloc_sz;
- RTE_LOG(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was expanded by %zdMB",
socket, alloc_sz >> 20ULL);
free(ms);
@@ -693,7 +693,7 @@ malloc_heap_alloc_on_heap_id(const char *type, size_t size,
/* this should have succeeded */
if (ret == NULL)
- RTE_LOG(ERR, EAL, "Error allocating from heap\n");
+ RTE_LOG_LINE(ERR, EAL, "Error allocating from heap");
}
alloc_unlock:
rte_spinlock_unlock(&(heap->lock));
@@ -1040,7 +1040,7 @@ malloc_heap_free(struct malloc_elem *elem)
/* we didn't exit early, meaning we have unmapped some pages */
unmapped = true;
- RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB",
msl->socket_id, aligned_len >> 20ULL);
rte_mcfg_mem_write_unlock();
@@ -1199,7 +1199,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[],
}
}
if (msl == NULL) {
- RTE_LOG(ERR, EAL, "Couldn't find empty memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't find empty memseg list");
rte_errno = ENOSPC;
return NULL;
}
@@ -1210,7 +1210,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[],
/* create the backing fbarray */
if (rte_fbarray_init(&msl->memseg_arr, fbarray_name, n_pages,
sizeof(struct rte_memseg)) < 0) {
- RTE_LOG(ERR, EAL, "Couldn't create fbarray backing the memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't create fbarray backing the memseg list");
return NULL;
}
arr = &msl->memseg_arr;
@@ -1310,7 +1310,7 @@ malloc_heap_add_external_memory(struct malloc_heap *heap,
heap->total_size += msl->len;
/* all done! */
- RTE_LOG(DEBUG, EAL, "Added segment for heap %s starting at %p\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Added segment for heap %s starting at %p",
heap->name, msl->base_va);
/* notify all subscribers that a new memory area has been added */
@@ -1356,7 +1356,7 @@ malloc_heap_create(struct malloc_heap *heap, const char *heap_name)
/* prevent overflow. did you really create 2 billion heaps??? */
if (next_socket_id > INT32_MAX) {
- RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot assign new socket ID's");
rte_errno = ENOSPC;
return -1;
}
@@ -1382,17 +1382,17 @@ int
malloc_heap_destroy(struct malloc_heap *heap)
{
if (heap->alloc_count != 0) {
- RTE_LOG(ERR, EAL, "Heap is still in use\n");
+ RTE_LOG_LINE(ERR, EAL, "Heap is still in use");
rte_errno = EBUSY;
return -1;
}
if (heap->first != NULL || heap->last != NULL) {
- RTE_LOG(ERR, EAL, "Heap still contains memory segments\n");
+ RTE_LOG_LINE(ERR, EAL, "Heap still contains memory segments");
rte_errno = EBUSY;
return -1;
}
if (heap->total_size != 0)
- RTE_LOG(ERR, EAL, "Total size not zero, heap is likely corrupt\n");
+ RTE_LOG_LINE(ERR, EAL, "Total size not zero, heap is likely corrupt");
/* Reset all of the heap but the (hold) lock so caller can release it. */
RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0);
@@ -1411,7 +1411,7 @@ rte_eal_malloc_heap_init(void)
eal_get_internal_configuration();
if (internal_conf->match_allocations)
- RTE_LOG(DEBUG, EAL, "Hugepages will be freed exactly as allocated.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Hugepages will be freed exactly as allocated.");
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
/* assign min socket ID to external heaps */
@@ -1431,7 +1431,7 @@ rte_eal_malloc_heap_init(void)
}
if (register_mp_requests()) {
- RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register malloc multiprocess actions");
return -1;
}
diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c
index 4d62397aba..e0f49bc471 100644
--- a/lib/eal/common/malloc_mp.c
+++ b/lib/eal/common/malloc_mp.c
@@ -156,7 +156,7 @@ handle_sync(const struct rte_mp_msg *msg, const void *peer)
int ret;
if (req->t != REQ_TYPE_SYNC) {
- RTE_LOG(ERR, EAL, "Unexpected request from primary\n");
+ RTE_LOG_LINE(ERR, EAL, "Unexpected request from primary");
return -1;
}
@@ -189,19 +189,19 @@ handle_free_request(const struct malloc_mp_req *m)
/* check if the requested memory actually exists */
msl = rte_mem_virt2memseg_list(start);
if (msl == NULL) {
- RTE_LOG(ERR, EAL, "Requested to free unknown memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Requested to free unknown memory");
return -1;
}
/* check if end is within the same memory region */
if (rte_mem_virt2memseg_list(end) != msl) {
- RTE_LOG(ERR, EAL, "Requested to free memory spanning multiple regions\n");
+ RTE_LOG_LINE(ERR, EAL, "Requested to free memory spanning multiple regions");
return -1;
}
/* we're supposed to only free memory that's not external */
if (msl->external) {
- RTE_LOG(ERR, EAL, "Requested to free external memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Requested to free external memory");
return -1;
}
@@ -228,13 +228,13 @@ handle_alloc_request(const struct malloc_mp_req *m,
/* this is checked by the API, but we need to prevent divide by zero */
if (ar->page_sz == 0 || !rte_is_power_of_2(ar->page_sz)) {
- RTE_LOG(ERR, EAL, "Attempting to allocate with invalid page size\n");
+ RTE_LOG_LINE(ERR, EAL, "Attempting to allocate with invalid page size");
return -1;
}
/* heap idx is index into the heap array, not socket ID */
if (ar->malloc_heap_idx >= RTE_MAX_HEAPS) {
- RTE_LOG(ERR, EAL, "Attempting to allocate from invalid heap\n");
+ RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from invalid heap");
return -1;
}
@@ -247,7 +247,7 @@ handle_alloc_request(const struct malloc_mp_req *m,
* socket ID's are always lower than RTE_MAX_NUMA_NODES.
*/
if (heap->socket_id >= RTE_MAX_NUMA_NODES) {
- RTE_LOG(ERR, EAL, "Attempting to allocate from external heap\n");
+ RTE_LOG_LINE(ERR, EAL, "Attempting to allocate from external heap");
return -1;
}
@@ -258,7 +258,7 @@ handle_alloc_request(const struct malloc_mp_req *m,
/* we can't know in advance how many pages we'll need, so we malloc */
ms = malloc(sizeof(*ms) * n_segs);
if (ms == NULL) {
- RTE_LOG(ERR, EAL, "Couldn't allocate memory for request state\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't allocate memory for request state");
return -1;
}
memset(ms, 0, sizeof(*ms) * n_segs);
@@ -307,13 +307,13 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused)
/* make sure it's not a dupe */
entry = find_request_by_id(m->id);
if (entry != NULL) {
- RTE_LOG(ERR, EAL, "Duplicate request id\n");
+ RTE_LOG_LINE(ERR, EAL, "Duplicate request id");
goto fail;
}
entry = malloc(sizeof(*entry));
if (entry == NULL) {
- RTE_LOG(ERR, EAL, "Unable to allocate memory for request\n");
+ RTE_LOG_LINE(ERR, EAL, "Unable to allocate memory for request");
goto fail;
}
@@ -325,7 +325,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused)
} else if (m->t == REQ_TYPE_FREE) {
ret = handle_free_request(m);
} else {
- RTE_LOG(ERR, EAL, "Unexpected request from secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Unexpected request from secondary");
goto fail;
}
@@ -345,7 +345,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused)
resp->id = m->id;
if (rte_mp_sendmsg(&resp_msg)) {
- RTE_LOG(ERR, EAL, "Couldn't send response\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't send response");
goto fail;
}
/* we did not modify the request */
@@ -376,7 +376,7 @@ handle_request(const struct rte_mp_msg *msg, const void *peer __rte_unused)
handle_sync_response);
} while (ret != 0 && rte_errno == EEXIST);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Couldn't send sync request\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't send sync request");
if (m->t == REQ_TYPE_ALLOC)
free(entry->alloc_state.ms);
goto fail;
@@ -414,7 +414,7 @@ handle_sync_response(const struct rte_mp_msg *request,
entry = find_request_by_id(mpreq->id);
if (entry == NULL) {
- RTE_LOG(ERR, EAL, "Wrong request ID\n");
+ RTE_LOG_LINE(ERR, EAL, "Wrong request ID");
goto fail;
}
@@ -428,12 +428,12 @@ handle_sync_response(const struct rte_mp_msg *request,
(struct malloc_mp_req *)reply->msgs[i].param;
if (resp->t != REQ_TYPE_SYNC) {
- RTE_LOG(ERR, EAL, "Unexpected response to sync request\n");
+ RTE_LOG_LINE(ERR, EAL, "Unexpected response to sync request");
result = REQ_RESULT_FAIL;
break;
}
if (resp->id != entry->user_req.id) {
- RTE_LOG(ERR, EAL, "Response to wrong sync request\n");
+ RTE_LOG_LINE(ERR, EAL, "Response to wrong sync request");
result = REQ_RESULT_FAIL;
break;
}
@@ -458,7 +458,7 @@ handle_sync_response(const struct rte_mp_msg *request,
strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name));
if (rte_mp_sendmsg(&msg))
- RTE_LOG(ERR, EAL, "Could not send message to secondary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process");
TAILQ_REMOVE(&mp_request_list.list, entry, next);
free(entry);
@@ -482,7 +482,7 @@ handle_sync_response(const struct rte_mp_msg *request,
strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name));
if (rte_mp_sendmsg(&msg))
- RTE_LOG(ERR, EAL, "Could not send message to secondary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process");
TAILQ_REMOVE(&mp_request_list.list, entry, next);
free(entry->alloc_state.ms);
@@ -524,7 +524,7 @@ handle_sync_response(const struct rte_mp_msg *request,
handle_rollback_response);
} while (ret != 0 && rte_errno == EEXIST);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Could not send rollback request to secondary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not send rollback request to secondary process");
/* we couldn't send rollback request, but that's OK -
* secondary will time out, and memory has been removed
@@ -536,7 +536,7 @@ handle_sync_response(const struct rte_mp_msg *request,
goto fail;
}
} else {
- RTE_LOG(ERR, EAL, " to sync request of unknown type\n");
+ RTE_LOG_LINE(ERR, EAL, " to sync request of unknown type");
goto fail;
}
@@ -564,12 +564,12 @@ handle_rollback_response(const struct rte_mp_msg *request,
entry = find_request_by_id(mpreq->id);
if (entry == NULL) {
- RTE_LOG(ERR, EAL, "Wrong request ID\n");
+ RTE_LOG_LINE(ERR, EAL, "Wrong request ID");
goto fail;
}
if (entry->user_req.t != REQ_TYPE_ALLOC) {
- RTE_LOG(ERR, EAL, "Unexpected active request\n");
+ RTE_LOG_LINE(ERR, EAL, "Unexpected active request");
goto fail;
}
@@ -582,7 +582,7 @@ handle_rollback_response(const struct rte_mp_msg *request,
strlcpy(msg.name, MP_ACTION_RESPONSE, sizeof(msg.name));
if (rte_mp_sendmsg(&msg))
- RTE_LOG(ERR, EAL, "Could not send message to secondary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not send message to secondary process");
/* clean up */
TAILQ_REMOVE(&mp_request_list.list, entry, next);
@@ -657,14 +657,14 @@ request_sync(void)
if (ret != 0) {
/* if IPC is unsupported, behave as if the call succeeded */
if (rte_errno != ENOTSUP)
- RTE_LOG(ERR, EAL, "Could not send sync request to secondary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not send sync request to secondary process");
else
ret = 0;
goto out;
}
if (reply.nb_received != reply.nb_sent) {
- RTE_LOG(ERR, EAL, "Not all secondaries have responded\n");
+ RTE_LOG_LINE(ERR, EAL, "Not all secondaries have responded");
goto out;
}
@@ -672,15 +672,15 @@ request_sync(void)
struct malloc_mp_req *resp =
(struct malloc_mp_req *)reply.msgs[i].param;
if (resp->t != REQ_TYPE_SYNC) {
- RTE_LOG(ERR, EAL, "Unexpected response from secondary\n");
+ RTE_LOG_LINE(ERR, EAL, "Unexpected response from secondary");
goto out;
}
if (resp->id != req->id) {
- RTE_LOG(ERR, EAL, "Wrong request ID\n");
+ RTE_LOG_LINE(ERR, EAL, "Wrong request ID");
goto out;
}
if (resp->result != REQ_RESULT_SUCCESS) {
- RTE_LOG(ERR, EAL, "Secondary process failed to synchronize\n");
+ RTE_LOG_LINE(ERR, EAL, "Secondary process failed to synchronize");
goto out;
}
}
@@ -711,14 +711,14 @@ request_to_primary(struct malloc_mp_req *user_req)
entry = malloc(sizeof(*entry));
if (entry == NULL) {
- RTE_LOG(ERR, EAL, "Cannot allocate memory for request\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for request");
goto fail;
}
memset(entry, 0, sizeof(*entry));
if (gettimeofday(&now, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Cannot get current time\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot get current time");
goto fail;
}
@@ -740,7 +740,7 @@ request_to_primary(struct malloc_mp_req *user_req)
memcpy(msg_req, user_req, sizeof(*msg_req));
if (rte_mp_sendmsg(&msg)) {
- RTE_LOG(ERR, EAL, "Cannot send message to primary\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot send message to primary");
goto fail;
}
@@ -759,7 +759,7 @@ request_to_primary(struct malloc_mp_req *user_req)
} while (ret != 0 && ret != ETIMEDOUT);
if (entry->state != REQ_STATE_COMPLETE) {
- RTE_LOG(ERR, EAL, "Request timed out\n");
+ RTE_LOG_LINE(ERR, EAL, "Request timed out");
ret = -1;
} else {
ret = 0;
@@ -783,24 +783,24 @@ register_mp_requests(void)
/* it's OK for primary to not support IPC */
if (rte_mp_action_register(MP_ACTION_REQUEST, handle_request) &&
rte_errno != ENOTSUP) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
MP_ACTION_REQUEST);
return -1;
}
} else {
if (rte_mp_action_register(MP_ACTION_SYNC, handle_sync)) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
MP_ACTION_SYNC);
return -1;
}
if (rte_mp_action_register(MP_ACTION_ROLLBACK, handle_sync)) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
MP_ACTION_SYNC);
return -1;
}
if (rte_mp_action_register(MP_ACTION_RESPONSE,
handle_response)) {
- RTE_LOG(ERR, EAL, "Couldn't register '%s' action\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't register '%s' action",
MP_ACTION_RESPONSE);
return -1;
}
diff --git a/lib/eal/common/rte_keepalive.c b/lib/eal/common/rte_keepalive.c
index e0494b2010..699022ae1c 100644
--- a/lib/eal/common/rte_keepalive.c
+++ b/lib/eal/common/rte_keepalive.c
@@ -53,7 +53,7 @@ struct rte_keepalive {
static void
print_trace(const char *msg, struct rte_keepalive *keepcfg, int idx_core)
{
- RTE_LOG(INFO, EAL, "%sLast seen %" PRId64 "ms ago.\n",
+ RTE_LOG_LINE(INFO, EAL, "%sLast seen %" PRId64 "ms ago.",
msg,
((rte_rdtsc() - keepcfg->last_alive[idx_core])*1000)
/ rte_get_tsc_hz()
diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c
index 9db0c399ae..9b3038805a 100644
--- a/lib/eal/common/rte_malloc.c
+++ b/lib/eal/common/rte_malloc.c
@@ -35,7 +35,7 @@ mem_free(void *addr, const bool trace_ena)
if (addr == NULL) return;
if (malloc_heap_free(malloc_elem_from_data(addr)) < 0)
- RTE_LOG(ERR, EAL, "Error: Invalid memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Error: Invalid memory");
}
void
@@ -171,7 +171,7 @@ rte_realloc_socket(void *ptr, size_t size, unsigned int align, int socket)
struct malloc_elem *elem = malloc_elem_from_data(ptr);
if (elem == NULL) {
- RTE_LOG(ERR, EAL, "Error: memory corruption detected\n");
+ RTE_LOG_LINE(ERR, EAL, "Error: memory corruption detected");
return NULL;
}
@@ -598,7 +598,7 @@ rte_malloc_heap_create(const char *heap_name)
/* existing heap */
if (strncmp(heap_name, tmp->name,
RTE_HEAP_NAME_MAX_LEN) == 0) {
- RTE_LOG(ERR, EAL, "Heap %s already exists\n",
+ RTE_LOG_LINE(ERR, EAL, "Heap %s already exists",
heap_name);
rte_errno = EEXIST;
ret = -1;
@@ -611,7 +611,7 @@ rte_malloc_heap_create(const char *heap_name)
}
}
if (heap == NULL) {
- RTE_LOG(ERR, EAL, "Cannot create new heap: no space\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create new heap: no space");
rte_errno = ENOSPC;
ret = -1;
goto unlock;
@@ -643,7 +643,7 @@ rte_malloc_heap_destroy(const char *heap_name)
/* start from non-socket heaps */
heap = find_named_heap(heap_name);
if (heap == NULL) {
- RTE_LOG(ERR, EAL, "Heap %s not found\n", heap_name);
+ RTE_LOG_LINE(ERR, EAL, "Heap %s not found", heap_name);
rte_errno = ENOENT;
ret = -1;
goto unlock;
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index e183d2e631..3ed4186add 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -87,8 +87,8 @@ rte_service_init(void)
RTE_BUILD_BUG_ON(RTE_SERVICE_NUM_MAX > 64);
if (rte_service_library_initialized) {
- RTE_LOG(NOTICE, EAL,
- "service library init() called, init flag %d\n",
+ RTE_LOG_LINE(NOTICE, EAL,
+ "service library init() called, init flag %d",
rte_service_library_initialized);
return -EALREADY;
}
@@ -97,14 +97,14 @@ rte_service_init(void)
sizeof(struct rte_service_spec_impl),
RTE_CACHE_LINE_SIZE);
if (!rte_services) {
- RTE_LOG(ERR, EAL, "error allocating rte services array\n");
+ RTE_LOG_LINE(ERR, EAL, "error allocating rte services array");
goto fail_mem;
}
lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
if (!lcore_states) {
- RTE_LOG(ERR, EAL, "error allocating core states array\n");
+ RTE_LOG_LINE(ERR, EAL, "error allocating core states array");
goto fail_mem;
}
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 568e06e9ed..2c5d196af0 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -117,7 +117,7 @@ rte_eal_config_create(void)
if (mem_cfg_fd < 0){
mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600);
if (mem_cfg_fd < 0) {
- RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -127,7 +127,7 @@ rte_eal_config_create(void)
if (retval < 0){
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -136,8 +136,8 @@ rte_eal_config_create(void)
if (retval < 0){
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary "
- "process running?\n", pathname);
+ RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. Is another primary "
+ "process running?", pathname);
return -1;
}
@@ -145,7 +145,7 @@ rte_eal_config_create(void)
rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr,
&cfg_len_aligned, page_sz, 0, 0);
if (rte_mem_cfg_addr == NULL) {
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config");
close(mem_cfg_fd);
mem_cfg_fd = -1;
return -1;
@@ -156,7 +156,7 @@ rte_eal_config_create(void)
cfg_len_aligned, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0);
if (mapped_mem_cfg_addr == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config");
munmap(rte_mem_cfg_addr, cfg_len);
close(mem_cfg_fd);
mem_cfg_fd = -1;
@@ -190,7 +190,7 @@ rte_eal_config_attach(void)
if (mem_cfg_fd < 0){
mem_cfg_fd = open(pathname, O_RDWR);
if (mem_cfg_fd < 0) {
- RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -202,7 +202,7 @@ rte_eal_config_attach(void)
if (rte_mem_cfg_addr == MAP_FAILED) {
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -242,14 +242,14 @@ rte_eal_config_reattach(void)
if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) {
if (mem_config != MAP_FAILED) {
/* errno is stale, don't use */
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]"
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]"
" - please use '--" OPT_BASE_VIRTADDR
- "' option\n",
+ "' option",
rte_mem_cfg_addr, mem_config);
munmap(mem_config, sizeof(struct rte_mem_config));
return -1;
}
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -280,7 +280,7 @@ eal_proc_type_detect(void)
ptype = RTE_PROC_SECONDARY;
}
- RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s",
ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY");
return ptype;
@@ -307,20 +307,20 @@ rte_config_init(void)
return -1;
eal_mcfg_wait_complete();
if (eal_mcfg_check_version() < 0) {
- RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n");
+ RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch");
return -1;
}
if (rte_eal_config_reattach() < 0)
return -1;
if (!__rte_mp_enable()) {
- RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n");
+ RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment");
return -1;
}
eal_mcfg_update_internal();
break;
case RTE_PROC_AUTO:
case RTE_PROC_INVALID:
- RTE_LOG(ERR, EAL, "Invalid process type %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Invalid process type %d",
config->process_type);
return -1;
}
@@ -454,7 +454,7 @@ eal_parse_args(int argc, char **argv)
{
char *ops_name = strdup(optarg);
if (ops_name == NULL)
- RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name");
else {
/* free old ops name */
free(internal_conf->user_mbuf_pool_ops_name);
@@ -469,16 +469,16 @@ eal_parse_args(int argc, char **argv)
exit(EXIT_SUCCESS);
default:
if (opt < OPT_LONG_MIN_NUM && isprint(opt)) {
- RTE_LOG(ERR, EAL, "Option %c is not supported "
- "on FreeBSD\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %c is not supported "
+ "on FreeBSD", opt);
} else if (opt >= OPT_LONG_MIN_NUM &&
opt < OPT_LONG_MAX_NUM) {
- RTE_LOG(ERR, EAL, "Option %s is not supported "
- "on FreeBSD\n",
+ RTE_LOG_LINE(ERR, EAL, "Option %s is not supported "
+ "on FreeBSD",
eal_long_options[option_index].name);
} else {
- RTE_LOG(ERR, EAL, "Option %d is not supported "
- "on FreeBSD\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %d is not supported "
+ "on FreeBSD", opt);
}
eal_usage(prgname);
ret = -1;
@@ -489,11 +489,11 @@ eal_parse_args(int argc, char **argv)
/* create runtime data directory. In no_shconf mode, skip any errors */
if (eal_create_runtime_dir() < 0) {
if (internal_conf->no_shconf == 0) {
- RTE_LOG(ERR, EAL, "Cannot create runtime directory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory");
ret = -1;
goto out;
} else
- RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n");
+ RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created");
}
if (eal_adjust_config(internal_conf) != 0) {
@@ -545,7 +545,7 @@ eal_check_mem_on_local_socket(void)
socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
+ RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!");
}
@@ -572,7 +572,7 @@ rte_eal_iopl_init(void)
static void rte_eal_init_alert(const char *msg)
{
fprintf(stderr, "EAL: FATAL: %s\n", msg);
- RTE_LOG(ERR, EAL, "%s\n", msg);
+ RTE_LOG_LINE(ERR, EAL, "%s", msg);
}
/* Launch threads, called at application init(). */
@@ -629,7 +629,7 @@ rte_eal_init(int argc, char **argv)
/* FreeBSD always uses legacy memory model */
internal_conf->legacy_mem = true;
if (internal_conf->in_memory) {
- RTE_LOG(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'\n", OPT_IN_MEMORY);
+ RTE_LOG_LINE(WARNING, EAL, "Warning: ignoring unsupported flag, '%s'", OPT_IN_MEMORY);
internal_conf->in_memory = false;
}
@@ -695,14 +695,14 @@ rte_eal_init(int argc, char **argv)
has_phys_addr = internal_conf->no_hugetlbfs == 0;
iova_mode = internal_conf->iova_mode;
if (iova_mode == RTE_IOVA_DC) {
- RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting");
if (has_phys_addr) {
- RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests");
iova_mode = rte_bus_get_iommu_class();
if (iova_mode == RTE_IOVA_DC) {
if (!RTE_IOVA_IN_MBUF) {
iova_mode = RTE_IOVA_VA;
- RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option.");
} else {
iova_mode = RTE_IOVA_PA;
}
@@ -725,7 +725,7 @@ rte_eal_init(int argc, char **argv)
}
rte_eal_get_configuration()->iova_mode = iova_mode;
- RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n",
+ RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'",
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
if (internal_conf->no_hugetlbfs == 0) {
@@ -751,11 +751,11 @@ rte_eal_init(int argc, char **argv)
if (internal_conf->vmware_tsc_map == 1) {
#ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT
rte_cycles_vmware_tsc_map = 1;
- RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, "
- "you must have monitor_control.pseudo_perfctr = TRUE\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, "
+ "you must have monitor_control.pseudo_perfctr = TRUE");
#else
- RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because "
- "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n");
+ RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because "
+ "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set");
#endif
}
@@ -818,7 +818,7 @@ rte_eal_init(int argc, char **argv)
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
config->main_lcore, (uintptr_t)pthread_self(), cpuset,
ret == 0 ? "" : "...");
@@ -917,7 +917,7 @@ rte_eal_cleanup(void)
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
rte_memory_order_relaxed, rte_memory_order_relaxed)) {
- RTE_LOG(WARNING, EAL, "Already called cleanup\n");
+ RTE_LOG_LINE(WARNING, EAL, "Already called cleanup");
rte_errno = EALREADY;
return -1;
}
diff --git a/lib/eal/freebsd/eal_alarm.c b/lib/eal/freebsd/eal_alarm.c
index e5b0909a45..2493adf8ae 100644
--- a/lib/eal/freebsd/eal_alarm.c
+++ b/lib/eal/freebsd/eal_alarm.c
@@ -59,7 +59,7 @@ rte_eal_alarm_init(void)
intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
if (intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle");
goto error;
}
diff --git a/lib/eal/freebsd/eal_dev.c b/lib/eal/freebsd/eal_dev.c
index c3dfe9108f..8d35148ba3 100644
--- a/lib/eal/freebsd/eal_dev.c
+++ b/lib/eal/freebsd/eal_dev.c
@@ -8,27 +8,27 @@
int
rte_dev_event_monitor_start(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD");
return -1;
}
int
rte_dev_event_monitor_stop(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD");
return -1;
}
int
rte_dev_hotplug_handle_enable(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD");
return -1;
}
int
rte_dev_hotplug_handle_disable(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for FreeBSD");
return -1;
}
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index e58e618469..3c97daa444 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -72,7 +72,7 @@ eal_hugepage_info_init(void)
&sysctl_size, NULL, 0);
if (error != 0) {
- RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers\n");
+ RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.num_buffers");
return -1;
}
@@ -81,28 +81,28 @@ eal_hugepage_info_init(void)
&sysctl_size, NULL, 0);
if (error != 0) {
- RTE_LOG(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size\n");
+ RTE_LOG_LINE(ERR, EAL, "could not read sysctl hw.contigmem.buffer_size");
return -1;
}
fd = open(CONTIGMEM_DEV, O_RDWR);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "could not open "CONTIGMEM_DEV"\n");
+ RTE_LOG_LINE(ERR, EAL, "could not open "CONTIGMEM_DEV);
return -1;
}
if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
- RTE_LOG(ERR, EAL, "could not lock memory. Is another DPDK process running?\n");
+ RTE_LOG_LINE(ERR, EAL, "could not lock memory. Is another DPDK process running?");
return -1;
}
if (buffer_size >= 1<<30)
- RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB\n",
+ RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dGB",
num_buffers, (int)(buffer_size>>30));
else if (buffer_size >= 1<<20)
- RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB\n",
+ RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dMB",
num_buffers, (int)(buffer_size>>20));
else
- RTE_LOG(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB\n",
+ RTE_LOG_LINE(INFO, EAL, "Contigmem driver has %d buffers, each of size %dKB",
num_buffers, (int)(buffer_size>>10));
strlcpy(hpi->hugedir, CONTIGMEM_DEV, sizeof(hpi->hugedir));
@@ -117,7 +117,7 @@ eal_hugepage_info_init(void)
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
sizeof(internal_conf->hugepage_info));
if (tmp_hpi == NULL ) {
- RTE_LOG(ERR, EAL, "Failed to create shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!");
return -1;
}
@@ -132,7 +132,7 @@ eal_hugepage_info_init(void)
}
if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
- RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!");
return -1;
}
@@ -154,14 +154,14 @@ eal_hugepage_info_read(void)
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
sizeof(internal_conf->hugepage_info));
if (tmp_hpi == NULL) {
- RTE_LOG(ERR, EAL, "Failed to open shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!");
return -1;
}
memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info));
if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
- RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!");
return -1;
}
return 0;
diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 2b31dfb099..ffba823808 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -90,12 +90,12 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* first do parameter checking */
if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
- RTE_LOG(ERR, EAL,
- "Registering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Registering with invalid input parameter");
return -EINVAL;
}
if (kq < 0) {
- RTE_LOG(ERR, EAL, "Kqueue is not active: %d\n", kq);
+ RTE_LOG_LINE(ERR, EAL, "Kqueue is not active: %d", kq);
return -ENODEV;
}
@@ -120,7 +120,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* allocate a new interrupt callback entity */
callback = calloc(1, sizeof(*callback));
if (callback == NULL) {
- RTE_LOG(ERR, EAL, "Can not allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not allocate memory");
ret = -ENOMEM;
goto fail;
}
@@ -132,13 +132,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
if (src == NULL) {
src = calloc(1, sizeof(*src));
if (src == NULL) {
- RTE_LOG(ERR, EAL, "Can not allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not allocate memory");
ret = -ENOMEM;
goto fail;
} else {
src->intr_handle = rte_intr_instance_dup(intr_handle);
if (src->intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not create intr instance");
ret = -ENOMEM;
free(src);
src = NULL;
@@ -167,7 +167,7 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
ke.flags = EV_ADD; /* mark for addition to the queue */
if (intr_source_to_kevent(intr_handle, &ke) < 0) {
- RTE_LOG(ERR, EAL, "Cannot convert interrupt handle to kevent\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot convert interrupt handle to kevent");
ret = -ENODEV;
goto fail;
}
@@ -181,10 +181,10 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
* user. so, don't output it unless debug log level set.
*/
if (errno == ENODEV)
- RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Interrupt handle %d not supported",
rte_intr_fd_get(src->intr_handle));
else
- RTE_LOG(ERR, EAL, "Error adding fd %d kevent, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error adding fd %d kevent, %s",
rte_intr_fd_get(src->intr_handle),
strerror(errno));
ret = -errno;
@@ -222,13 +222,13 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* do parameter checking first */
if (rte_intr_fd_get(intr_handle) < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Unregistering with invalid input parameter");
return -EINVAL;
}
if (kq < 0) {
- RTE_LOG(ERR, EAL, "Kqueue is not active\n");
+ RTE_LOG_LINE(ERR, EAL, "Kqueue is not active");
return -ENODEV;
}
@@ -277,12 +277,12 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* do parameter checking first */
if (rte_intr_fd_get(intr_handle) < 0) {
- RTE_LOG(ERR, EAL,
- "Unregistering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Unregistering with invalid input parameter");
return -EINVAL;
}
if (kq < 0) {
- RTE_LOG(ERR, EAL, "Kqueue is not active\n");
+ RTE_LOG_LINE(ERR, EAL, "Kqueue is not active");
return -ENODEV;
}
@@ -312,7 +312,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
ke.flags = EV_DELETE; /* mark for deletion from the queue */
if (intr_source_to_kevent(intr_handle, &ke) < 0) {
- RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent");
ret = -ENODEV;
goto out;
}
@@ -321,7 +321,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
* remove intr file descriptor from wait list.
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s",
rte_intr_fd_get(src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected condition
@@ -396,7 +396,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d",
rte_intr_fd_get(intr_handle));
rc = -1;
break;
@@ -437,7 +437,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d",
rte_intr_fd_get(intr_handle));
rc = -1;
break;
@@ -513,13 +513,13 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
if (errno == EINTR || errno == EWOULDBLOCK)
continue;
- RTE_LOG(ERR, EAL, "Error reading from file "
- "descriptor %d: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error reading from file "
+ "descriptor %d: %s",
event_fd,
strerror(errno));
} else if (bytes_read == 0)
- RTE_LOG(ERR, EAL, "Read nothing from file "
- "descriptor %d\n", event_fd);
+ RTE_LOG_LINE(ERR, EAL, "Read nothing from file "
+ "descriptor %d", event_fd);
else
call = true;
}
@@ -556,7 +556,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
ke.flags = EV_DELETE;
if (intr_source_to_kevent(src->intr_handle, &ke) < 0) {
- RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot convert to kevent");
rte_spinlock_unlock(&intr_lock);
return;
}
@@ -565,7 +565,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
* remove intr file descriptor from wait list.
*/
if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
- RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error removing fd %d kevent, %s",
rte_intr_fd_get(src->intr_handle),
strerror(errno));
/* removing non-existent even is an expected
@@ -606,8 +606,8 @@ eal_intr_thread_main(void *arg __rte_unused)
if (nfds < 0) {
if (errno == EINTR)
continue;
- RTE_LOG(ERR, EAL,
- "kevent returns with fail\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "kevent returns with fail");
break;
}
/* kevent timeout, will never happen here */
@@ -632,7 +632,7 @@ rte_eal_intr_init(void)
kq = kqueue();
if (kq < 0) {
- RTE_LOG(ERR, EAL, "Cannot create kqueue instance\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create kqueue instance");
return -1;
}
@@ -641,8 +641,8 @@ rte_eal_intr_init(void)
eal_intr_thread_main, NULL);
if (ret != 0) {
rte_errno = -ret;
- RTE_LOG(ERR, EAL,
- "Failed to create thread for interrupt handling\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to create thread for interrupt handling");
}
return ret;
diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c
index d9ef4bc9c5..cfd375076a 100644
--- a/lib/eal/freebsd/eal_lcore.c
+++ b/lib/eal/freebsd/eal_lcore.c
@@ -30,7 +30,7 @@ eal_get_ncpus(void)
if (ncpu < 0) {
sysctl(mib, 2, &ncpu, &len, NULL, 0);
- RTE_LOG(INFO, EAL, "Sysctl reports %d cpus\n", ncpu);
+ RTE_LOG_LINE(INFO, EAL, "Sysctl reports %d cpus", ncpu);
}
return ncpu;
}
diff --git a/lib/eal/freebsd/eal_memalloc.c b/lib/eal/freebsd/eal_memalloc.c
index 00ab02cb63..f96ed2ce21 100644
--- a/lib/eal/freebsd/eal_memalloc.c
+++ b/lib/eal/freebsd/eal_memalloc.c
@@ -15,21 +15,21 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms __rte_unused,
int __rte_unused n_segs, size_t __rte_unused page_sz,
int __rte_unused socket, bool __rte_unused exact)
{
- RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD");
return -1;
}
struct rte_memseg *
eal_memalloc_alloc_seg(size_t __rte_unused page_sz, int __rte_unused socket)
{
- RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD");
return NULL;
}
int
eal_memalloc_free_seg(struct rte_memseg *ms __rte_unused)
{
- RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD");
return -1;
}
@@ -37,14 +37,14 @@ int
eal_memalloc_free_seg_bulk(struct rte_memseg **ms __rte_unused,
int n_segs __rte_unused)
{
- RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD");
return -1;
}
int
eal_memalloc_sync_with_primary(void)
{
- RTE_LOG(ERR, EAL, "Memory hotplug not supported on FreeBSD\n");
+ RTE_LOG_LINE(ERR, EAL, "Memory hotplug not supported on FreeBSD");
return -1;
}
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index 5c6165c580..195f570da0 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -84,7 +84,7 @@ rte_eal_hugepage_init(void)
addr = mmap(NULL, mem_sz, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (addr == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
+ RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__,
strerror(errno));
return -1;
}
@@ -132,8 +132,8 @@ rte_eal_hugepage_init(void)
error = sysctlbyname(physaddr_str, &physaddr,
&sysctl_size, NULL, 0);
if (error < 0) {
- RTE_LOG(ERR, EAL, "Failed to get physical addr for buffer %u "
- "from %s\n", j, hpi->hugedir);
+ RTE_LOG_LINE(ERR, EAL, "Failed to get physical addr for buffer %u "
+ "from %s", j, hpi->hugedir);
return -1;
}
@@ -172,8 +172,8 @@ rte_eal_hugepage_init(void)
break;
}
if (msl_idx == RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
- "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
+ "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.");
return -1;
}
arr = &msl->memseg_arr;
@@ -190,7 +190,7 @@ rte_eal_hugepage_init(void)
hpi->lock_descriptor,
j * EAL_PAGE_SIZE);
if (addr == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s",
j, hpi->hugedir);
return -1;
}
@@ -205,8 +205,8 @@ rte_eal_hugepage_init(void)
rte_fbarray_set_used(arr, ms_idx);
- RTE_LOG(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%"
- PRIx64", len %zu\n",
+ RTE_LOG_LINE(INFO, EAL, "Mapped memory segment %u @ %p: physaddr:0x%"
+ PRIx64", len %zu",
seg_idx++, addr, physaddr, page_sz);
total_mem += seg->len;
@@ -215,9 +215,9 @@ rte_eal_hugepage_init(void)
break;
}
if (total_mem < internal_conf->memory) {
- RTE_LOG(ERR, EAL, "Couldn't reserve requested memory, "
+ RTE_LOG_LINE(ERR, EAL, "Couldn't reserve requested memory, "
"requested: %" PRIu64 "M "
- "available: %" PRIu64 "M\n",
+ "available: %" PRIu64 "M",
internal_conf->memory >> 20, total_mem >> 20);
return -1;
}
@@ -268,7 +268,7 @@ rte_eal_hugepage_attach(void)
/* Obtain a file descriptor for contiguous memory */
fd_hugepage = open(cur_hpi->hugedir, O_RDWR);
if (fd_hugepage < 0) {
- RTE_LOG(ERR, EAL, "Could not open %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not open %s",
cur_hpi->hugedir);
goto error;
}
@@ -277,7 +277,7 @@ rte_eal_hugepage_attach(void)
/* Map the contiguous memory into each memory segment */
if (rte_memseg_walk(attach_segment, &wa) < 0) {
- RTE_LOG(ERR, EAL, "Failed to mmap buffer %u from %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Failed to mmap buffer %u from %s",
wa.seg_idx, cur_hpi->hugedir);
goto error;
}
@@ -402,8 +402,8 @@ memseg_primary_init(void)
unsigned int n_segs;
if (msl_idx >= RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL,
- "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS");
return -1;
}
@@ -424,7 +424,7 @@ memseg_primary_init(void)
type_msl_idx++;
if (memseg_list_alloc(msl)) {
- RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list");
return -1;
}
}
@@ -449,13 +449,13 @@ memseg_secondary_init(void)
continue;
if (rte_fbarray_attach(&msl->memseg_arr)) {
- RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists");
return -1;
}
/* preallocate VA space */
if (memseg_list_alloc(msl)) {
- RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory");
return -1;
}
}
diff --git a/lib/eal/freebsd/eal_thread.c b/lib/eal/freebsd/eal_thread.c
index 6f97a3c2c1..0f7284768a 100644
--- a/lib/eal/freebsd/eal_thread.c
+++ b/lib/eal/freebsd/eal_thread.c
@@ -38,7 +38,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
const size_t truncatedsz = sizeof(truncated);
if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz)
- RTE_LOG(DEBUG, EAL, "Truncated thread name\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name");
pthread_set_name_np((pthread_t)thread_id.opaque_id, truncated);
}
diff --git a/lib/eal/freebsd/eal_timer.c b/lib/eal/freebsd/eal_timer.c
index beff755a47..61488ff641 100644
--- a/lib/eal/freebsd/eal_timer.c
+++ b/lib/eal/freebsd/eal_timer.c
@@ -36,20 +36,20 @@ get_tsc_freq(void)
tmp = 0;
if (sysctlbyname("kern.timecounter.smp_tsc", &tmp, &sz, NULL, 0))
- RTE_LOG(WARNING, EAL, "%s\n", strerror(errno));
+ RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno));
else if (tmp != 1)
- RTE_LOG(WARNING, EAL, "TSC is not safe to use in SMP mode\n");
+ RTE_LOG_LINE(WARNING, EAL, "TSC is not safe to use in SMP mode");
tmp = 0;
if (sysctlbyname("kern.timecounter.invariant_tsc", &tmp, &sz, NULL, 0))
- RTE_LOG(WARNING, EAL, "%s\n", strerror(errno));
+ RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno));
else if (tmp != 1)
- RTE_LOG(WARNING, EAL, "TSC is not invariant\n");
+ RTE_LOG_LINE(WARNING, EAL, "TSC is not invariant");
sz = sizeof(tsc_hz);
if (sysctlbyname("machdep.tsc_freq", &tsc_hz, &sz, NULL, 0)) {
- RTE_LOG(WARNING, EAL, "%s\n", strerror(errno));
+ RTE_LOG_LINE(WARNING, EAL, "%s", strerror(errno));
return 0;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 57da058cec..8aaff34d54 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -94,7 +94,7 @@ eal_clean_runtime_dir(void)
/* open directory */
dir = opendir(runtime_dir);
if (!dir) {
- RTE_LOG(ERR, EAL, "Unable to open runtime directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to open runtime directory %s",
runtime_dir);
goto error;
}
@@ -102,14 +102,14 @@ eal_clean_runtime_dir(void)
/* lock the directory before doing anything, to avoid races */
if (flock(dir_fd, LOCK_EX) < 0) {
- RTE_LOG(ERR, EAL, "Unable to lock runtime directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to lock runtime directory %s",
runtime_dir);
goto error;
}
dirent = readdir(dir);
if (!dirent) {
- RTE_LOG(ERR, EAL, "Unable to read runtime directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to read runtime directory %s",
runtime_dir);
goto error;
}
@@ -159,7 +159,7 @@ eal_clean_runtime_dir(void)
if (dir)
closedir(dir);
- RTE_LOG(ERR, EAL, "Error while clearing runtime dir: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error while clearing runtime dir: %s",
strerror(errno));
return -1;
@@ -200,7 +200,7 @@ rte_eal_config_create(void)
if (mem_cfg_fd < 0){
mem_cfg_fd = open(pathname, O_RDWR | O_CREAT, 0600);
if (mem_cfg_fd < 0) {
- RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -210,7 +210,7 @@ rte_eal_config_create(void)
if (retval < 0){
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot resize '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot resize '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -219,8 +219,8 @@ rte_eal_config_create(void)
if (retval < 0){
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot create lock on '%s'. Is another primary "
- "process running?\n", pathname);
+ RTE_LOG_LINE(ERR, EAL, "Cannot create lock on '%s'. Is another primary "
+ "process running?", pathname);
return -1;
}
@@ -228,7 +228,7 @@ rte_eal_config_create(void)
rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr,
&cfg_len_aligned, page_sz, 0, 0);
if (rte_mem_cfg_addr == NULL) {
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config");
close(mem_cfg_fd);
mem_cfg_fd = -1;
return -1;
@@ -242,7 +242,7 @@ rte_eal_config_create(void)
munmap(rte_mem_cfg_addr, cfg_len);
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot remap memory for rte_config\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot remap memory for rte_config");
return -1;
}
@@ -275,7 +275,7 @@ rte_eal_config_attach(void)
if (mem_cfg_fd < 0){
mem_cfg_fd = open(pathname, O_RDWR);
if (mem_cfg_fd < 0) {
- RTE_LOG(ERR, EAL, "Cannot open '%s' for rte_mem_config\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot open '%s' for rte_mem_config",
pathname);
return -1;
}
@@ -287,7 +287,7 @@ rte_eal_config_attach(void)
if (mem_config == MAP_FAILED) {
close(mem_cfg_fd);
mem_cfg_fd = -1;
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -328,13 +328,13 @@ rte_eal_config_reattach(void)
if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) {
if (mem_config != MAP_FAILED) {
/* errno is stale, don't use */
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]"
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config at [%p], got [%p]"
" - please use '--" OPT_BASE_VIRTADDR
- "' option\n", rte_mem_cfg_addr, mem_config);
+ "' option", rte_mem_cfg_addr, mem_config);
munmap(mem_config, sizeof(struct rte_mem_config));
return -1;
}
- RTE_LOG(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot mmap memory for rte_config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -365,7 +365,7 @@ eal_proc_type_detect(void)
ptype = RTE_PROC_SECONDARY;
}
- RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s",
ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY");
return ptype;
@@ -392,20 +392,20 @@ rte_config_init(void)
return -1;
eal_mcfg_wait_complete();
if (eal_mcfg_check_version() < 0) {
- RTE_LOG(ERR, EAL, "Primary and secondary process DPDK version mismatch\n");
+ RTE_LOG_LINE(ERR, EAL, "Primary and secondary process DPDK version mismatch");
return -1;
}
if (rte_eal_config_reattach() < 0)
return -1;
if (!__rte_mp_enable()) {
- RTE_LOG(ERR, EAL, "Primary process refused secondary attachment\n");
+ RTE_LOG_LINE(ERR, EAL, "Primary process refused secondary attachment");
return -1;
}
eal_mcfg_update_internal();
break;
case RTE_PROC_AUTO:
case RTE_PROC_INVALID:
- RTE_LOG(ERR, EAL, "Invalid process type %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Invalid process type %d",
config->process_type);
return -1;
}
@@ -474,7 +474,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg)
len = strnlen(strval, SOCKET_MEM_STRLEN);
if (len == SOCKET_MEM_STRLEN) {
- RTE_LOG(ERR, EAL, "--socket-mem is too long\n");
+ RTE_LOG_LINE(ERR, EAL, "--socket-mem is too long");
return -1;
}
@@ -595,13 +595,13 @@ eal_parse_huge_worker_stack(const char *arg)
int ret;
if (pthread_attr_init(&attr) != 0) {
- RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size");
return -1;
}
ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size);
pthread_attr_destroy(&attr);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "Could not retrieve default stack size\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not retrieve default stack size");
return -1;
}
} else {
@@ -617,7 +617,7 @@ eal_parse_huge_worker_stack(const char *arg)
cfg->huge_worker_stack_size = stack_size * 1024;
}
- RTE_LOG(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Each worker thread will use %zu kB of DPDK memory as stack",
cfg->huge_worker_stack_size / 1024);
return 0;
}
@@ -673,7 +673,7 @@ eal_parse_args(int argc, char **argv)
{
char *hdir = strdup(optarg);
if (hdir == NULL)
- RTE_LOG(ERR, EAL, "Could not store hugepage directory\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not store hugepage directory");
else {
/* free old hugepage dir */
free(internal_conf->hugepage_dir);
@@ -685,7 +685,7 @@ eal_parse_args(int argc, char **argv)
{
char *prefix = strdup(optarg);
if (prefix == NULL)
- RTE_LOG(ERR, EAL, "Could not store file prefix\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not store file prefix");
else {
/* free old prefix */
free(internal_conf->hugefile_prefix);
@@ -696,8 +696,8 @@ eal_parse_args(int argc, char **argv)
case OPT_SOCKET_MEM_NUM:
if (eal_parse_socket_arg(optarg,
internal_conf->socket_mem) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_SOCKET_MEM "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_SOCKET_MEM);
eal_usage(prgname);
ret = -1;
goto out;
@@ -708,8 +708,8 @@ eal_parse_args(int argc, char **argv)
case OPT_SOCKET_LIMIT_NUM:
if (eal_parse_socket_arg(optarg,
internal_conf->socket_limit) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_SOCKET_LIMIT "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_SOCKET_LIMIT);
eal_usage(prgname);
ret = -1;
goto out;
@@ -719,8 +719,8 @@ eal_parse_args(int argc, char **argv)
case OPT_VFIO_INTR_NUM:
if (eal_parse_vfio_intr(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_VFIO_INTR "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_VFIO_INTR);
eal_usage(prgname);
ret = -1;
goto out;
@@ -729,8 +729,8 @@ eal_parse_args(int argc, char **argv)
case OPT_VFIO_VF_TOKEN_NUM:
if (eal_parse_vfio_vf_token(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameters for --"
- OPT_VFIO_VF_TOKEN "\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameters for --"
+ OPT_VFIO_VF_TOKEN);
eal_usage(prgname);
ret = -1;
goto out;
@@ -745,7 +745,7 @@ eal_parse_args(int argc, char **argv)
{
char *ops_name = strdup(optarg);
if (ops_name == NULL)
- RTE_LOG(ERR, EAL, "Could not store mbuf pool ops name\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not store mbuf pool ops name");
else {
/* free old ops name */
free(internal_conf->user_mbuf_pool_ops_name);
@@ -761,8 +761,8 @@ eal_parse_args(int argc, char **argv)
case OPT_HUGE_WORKER_STACK_NUM:
if (eal_parse_huge_worker_stack(optarg) < 0) {
- RTE_LOG(ERR, EAL, "invalid parameter for --"
- OPT_HUGE_WORKER_STACK"\n");
+ RTE_LOG_LINE(ERR, EAL, "invalid parameter for --"
+ OPT_HUGE_WORKER_STACK);
eal_usage(prgname);
ret = -1;
goto out;
@@ -771,16 +771,16 @@ eal_parse_args(int argc, char **argv)
default:
if (opt < OPT_LONG_MIN_NUM && isprint(opt)) {
- RTE_LOG(ERR, EAL, "Option %c is not supported "
- "on Linux\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %c is not supported "
+ "on Linux", opt);
} else if (opt >= OPT_LONG_MIN_NUM &&
opt < OPT_LONG_MAX_NUM) {
- RTE_LOG(ERR, EAL, "Option %s is not supported "
- "on Linux\n",
+ RTE_LOG_LINE(ERR, EAL, "Option %s is not supported "
+ "on Linux",
eal_long_options[option_index].name);
} else {
- RTE_LOG(ERR, EAL, "Option %d is not supported "
- "on Linux\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %d is not supported "
+ "on Linux", opt);
}
eal_usage(prgname);
ret = -1;
@@ -791,11 +791,11 @@ eal_parse_args(int argc, char **argv)
/* create runtime data directory. In no_shconf mode, skip any errors */
if (eal_create_runtime_dir() < 0) {
if (internal_conf->no_shconf == 0) {
- RTE_LOG(ERR, EAL, "Cannot create runtime directory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create runtime directory");
ret = -1;
goto out;
} else
- RTE_LOG(WARNING, EAL, "No DPDK runtime directory created\n");
+ RTE_LOG_LINE(WARNING, EAL, "No DPDK runtime directory created");
}
if (eal_adjust_config(internal_conf) != 0) {
@@ -843,7 +843,7 @@ eal_check_mem_on_local_socket(void)
socket_id = rte_lcore_to_socket_id(config->main_lcore);
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
- RTE_LOG(WARNING, EAL, "WARNING: Main core has no memory on local socket!\n");
+ RTE_LOG_LINE(WARNING, EAL, "WARNING: Main core has no memory on local socket!");
}
static int
@@ -880,7 +880,7 @@ static int rte_eal_vfio_setup(void)
static void rte_eal_init_alert(const char *msg)
{
fprintf(stderr, "EAL: FATAL: %s\n", msg);
- RTE_LOG(ERR, EAL, "%s\n", msg);
+ RTE_LOG_LINE(ERR, EAL, "%s", msg);
}
/*
@@ -1073,27 +1073,27 @@ rte_eal_init(int argc, char **argv)
enum rte_iova_mode iova_mode = rte_bus_get_iommu_class();
if (iova_mode == RTE_IOVA_DC) {
- RTE_LOG(DEBUG, EAL, "Buses did not request a specific IOVA mode.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Buses did not request a specific IOVA mode.");
if (!RTE_IOVA_IN_MBUF) {
iova_mode = RTE_IOVA_VA;
- RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option.");
} else if (!phys_addrs) {
/* if we have no access to physical addresses,
* pick IOVA as VA mode.
*/
iova_mode = RTE_IOVA_VA;
- RTE_LOG(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Physical addresses are unavailable, selecting IOVA as VA mode.");
} else if (is_iommu_enabled()) {
/* we have an IOMMU, pick IOVA as VA mode */
iova_mode = RTE_IOVA_VA;
- RTE_LOG(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "IOMMU is available, selecting IOVA as VA mode.");
} else {
/* physical addresses available, and no IOMMU
* found, so pick IOVA as PA.
*/
iova_mode = RTE_IOVA_PA;
- RTE_LOG(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "IOMMU is not available, selecting IOVA as PA mode.");
}
}
rte_eal_get_configuration()->iova_mode = iova_mode;
@@ -1114,7 +1114,7 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- RTE_LOG(INFO, EAL, "Selected IOVA mode '%s'\n",
+ RTE_LOG_LINE(INFO, EAL, "Selected IOVA mode '%s'",
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
if (internal_conf->no_hugetlbfs == 0) {
@@ -1138,11 +1138,11 @@ rte_eal_init(int argc, char **argv)
if (internal_conf->vmware_tsc_map == 1) {
#ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT
rte_cycles_vmware_tsc_map = 1;
- RTE_LOG (DEBUG, EAL, "Using VMWARE TSC MAP, "
- "you must have monitor_control.pseudo_perfctr = TRUE\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Using VMWARE TSC MAP, "
+ "you must have monitor_control.pseudo_perfctr = TRUE");
#else
- RTE_LOG (WARNING, EAL, "Ignoring --vmware-tsc-map because "
- "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set\n");
+ RTE_LOG_LINE(WARNING, EAL, "Ignoring --vmware-tsc-map because "
+ "RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT is not set");
#endif
}
@@ -1229,7 +1229,7 @@ rte_eal_init(int argc, char **argv)
&lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
config->main_lcore, (uintptr_t)pthread_self(), cpuset,
ret == 0 ? "" : "...");
@@ -1350,7 +1350,7 @@ rte_eal_cleanup(void)
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
rte_memory_order_relaxed, rte_memory_order_relaxed)) {
- RTE_LOG(WARNING, EAL, "Already called cleanup\n");
+ RTE_LOG_LINE(WARNING, EAL, "Already called cleanup");
rte_errno = EALREADY;
return -1;
}
@@ -1420,7 +1420,7 @@ rte_eal_check_module(const char *module_name)
/* Check if there is sysfs mounted */
if (stat("/sys/module", &st) != 0) {
- RTE_LOG(DEBUG, EAL, "sysfs is not mounted! error %i (%s)\n",
+ RTE_LOG_LINE(DEBUG, EAL, "sysfs is not mounted! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -1428,12 +1428,12 @@ rte_eal_check_module(const char *module_name)
/* A module might be built-in, therefore try sysfs */
n = snprintf(sysfs_mod_name, PATH_MAX, "/sys/module/%s", module_name);
if (n < 0 || n > PATH_MAX) {
- RTE_LOG(DEBUG, EAL, "Could not format module path\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Could not format module path");
return -1;
}
if (stat(sysfs_mod_name, &st) != 0) {
- RTE_LOG(DEBUG, EAL, "Module %s not found! error %i (%s)\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Module %s not found! error %i (%s)",
sysfs_mod_name, errno, strerror(errno));
return 0;
}
diff --git a/lib/eal/linux/eal_alarm.c b/lib/eal/linux/eal_alarm.c
index 766ba2c251..3c0464ad10 100644
--- a/lib/eal/linux/eal_alarm.c
+++ b/lib/eal/linux/eal_alarm.c
@@ -65,7 +65,7 @@ rte_eal_alarm_init(void)
intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
if (intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle");
goto error;
}
diff --git a/lib/eal/linux/eal_dev.c b/lib/eal/linux/eal_dev.c
index ac76f6174d..16e817121d 100644
--- a/lib/eal/linux/eal_dev.c
+++ b/lib/eal/linux/eal_dev.c
@@ -64,7 +64,7 @@ static void sigbus_handler(int signum, siginfo_t *info,
{
int ret;
- RTE_LOG(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Thread catch SIGBUS, fault address:%p",
info->si_addr);
rte_spinlock_lock(&failure_handle_lock);
@@ -88,7 +88,7 @@ static void sigbus_handler(int signum, siginfo_t *info,
}
}
- RTE_LOG(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Success to handle SIGBUS for hot-unplug!");
}
static int cmp_dev_name(const struct rte_device *dev,
@@ -108,7 +108,7 @@ dev_uev_socket_fd_create(void)
fd = socket(PF_NETLINK, SOCK_RAW | SOCK_CLOEXEC | SOCK_NONBLOCK,
NETLINK_KOBJECT_UEVENT);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "create uevent fd failed.\n");
+ RTE_LOG_LINE(ERR, EAL, "create uevent fd failed.");
return -1;
}
@@ -119,7 +119,7 @@ dev_uev_socket_fd_create(void)
ret = bind(fd, (struct sockaddr *) &addr, sizeof(addr));
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Failed to bind uevent socket.\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to bind uevent socket.");
goto err;
}
@@ -245,18 +245,18 @@ dev_uev_handler(__rte_unused void *param)
return;
else if (ret <= 0) {
/* connection is closed or broken, can not up again. */
- RTE_LOG(ERR, EAL, "uevent socket connection is broken.\n");
+ RTE_LOG_LINE(ERR, EAL, "uevent socket connection is broken.");
rte_eal_alarm_set(1, dev_delayed_unregister, NULL);
return;
}
ret = dev_uev_parse(buf, &uevent, EAL_UEV_MSG_LEN);
if (ret < 0) {
- RTE_LOG(DEBUG, EAL, "Ignoring uevent '%s'\n", buf);
+ RTE_LOG_LINE(DEBUG, EAL, "Ignoring uevent '%s'", buf);
return;
}
- RTE_LOG(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)\n",
+ RTE_LOG_LINE(DEBUG, EAL, "receive uevent(name:%s, type:%d, subsystem:%d)",
uevent.devname, uevent.type, uevent.subsystem);
switch (uevent.subsystem) {
@@ -273,7 +273,7 @@ dev_uev_handler(__rte_unused void *param)
rte_spinlock_lock(&failure_handle_lock);
bus = rte_bus_find_by_name(busname);
if (bus == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find bus (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot find bus (%s)",
busname);
goto failure_handle_err;
}
@@ -281,15 +281,15 @@ dev_uev_handler(__rte_unused void *param)
dev = bus->find_device(NULL, cmp_dev_name,
uevent.devname);
if (dev == NULL) {
- RTE_LOG(ERR, EAL, "Cannot find device (%s) on "
- "bus (%s)\n", uevent.devname, busname);
+ RTE_LOG_LINE(ERR, EAL, "Cannot find device (%s) on "
+ "bus (%s)", uevent.devname, busname);
goto failure_handle_err;
}
ret = bus->hot_unplug_handler(dev);
if (ret) {
- RTE_LOG(ERR, EAL, "Can not handle hot-unplug "
- "for device (%s)\n", dev->name);
+ RTE_LOG_LINE(ERR, EAL, "Can not handle hot-unplug "
+ "for device (%s)", dev->name);
}
rte_spinlock_unlock(&failure_handle_lock);
}
@@ -318,7 +318,7 @@ rte_dev_event_monitor_start(void)
intr_handle = rte_intr_instance_alloc(RTE_INTR_INSTANCE_F_PRIVATE);
if (intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Fail to allocate intr_handle\n");
+ RTE_LOG_LINE(ERR, EAL, "Fail to allocate intr_handle");
goto exit;
}
@@ -332,7 +332,7 @@ rte_dev_event_monitor_start(void)
ret = dev_uev_socket_fd_create();
if (ret) {
- RTE_LOG(ERR, EAL, "error create device event fd.\n");
+ RTE_LOG_LINE(ERR, EAL, "error create device event fd.");
goto exit;
}
@@ -362,7 +362,7 @@ rte_dev_event_monitor_stop(void)
rte_rwlock_write_lock(&monitor_lock);
if (!monitor_refcount) {
- RTE_LOG(ERR, EAL, "device event monitor already stopped\n");
+ RTE_LOG_LINE(ERR, EAL, "device event monitor already stopped");
goto exit;
}
@@ -374,7 +374,7 @@ rte_dev_event_monitor_stop(void)
ret = rte_intr_callback_unregister(intr_handle, dev_uev_handler,
(void *)-1);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "fail to unregister uevent callback.\n");
+ RTE_LOG_LINE(ERR, EAL, "fail to unregister uevent callback.");
goto exit;
}
@@ -429,8 +429,8 @@ rte_dev_hotplug_handle_enable(void)
ret = dev_sigbus_handler_register();
if (ret < 0)
- RTE_LOG(ERR, EAL,
- "fail to register sigbus handler for devices.\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "fail to register sigbus handler for devices.");
hotplug_handle = true;
@@ -444,8 +444,8 @@ rte_dev_hotplug_handle_disable(void)
ret = dev_sigbus_handler_unregister();
if (ret < 0)
- RTE_LOG(ERR, EAL,
- "fail to unregister sigbus handler for devices.\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "fail to unregister sigbus handler for devices.");
hotplug_handle = false;
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 36a495fb1f..971c57989d 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -110,7 +110,7 @@ get_num_hugepages(const char *subdir, size_t sz, unsigned int reusable_pages)
over_pages = 0;
if (num_pages == 0 && over_pages == 0 && reusable_pages)
- RTE_LOG(WARNING, EAL, "No available %zu kB hugepages reported\n",
+ RTE_LOG_LINE(WARNING, EAL, "No available %zu kB hugepages reported",
sz >> 10);
num_pages += over_pages;
@@ -155,7 +155,7 @@ get_num_hugepages_on_node(const char *subdir, unsigned int socket, size_t sz)
return 0;
if (num_pages == 0)
- RTE_LOG(WARNING, EAL, "No free %zu kB hugepages reported on node %u\n",
+ RTE_LOG_LINE(WARNING, EAL, "No free %zu kB hugepages reported on node %u",
sz >> 10, socket);
/*
@@ -239,7 +239,7 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len)
if (rte_strsplit(buf, sizeof(buf), splitstr, _FIELDNAME_MAX,
split_tok) != _FIELDNAME_MAX) {
- RTE_LOG(ERR, EAL, "Error parsing %s\n", proc_mounts);
+ RTE_LOG_LINE(ERR, EAL, "Error parsing %s", proc_mounts);
break; /* return NULL */
}
@@ -325,7 +325,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data)
dir = opendir(hugedir);
if (!dir) {
- RTE_LOG(ERR, EAL, "Unable to open hugepage directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to open hugepage directory %s",
hugedir);
goto error;
}
@@ -333,7 +333,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data)
dirent = readdir(dir);
if (!dirent) {
- RTE_LOG(ERR, EAL, "Unable to read hugepage directory %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Unable to read hugepage directory %s",
hugedir);
goto error;
}
@@ -377,7 +377,7 @@ walk_hugedir(const char *hugedir, walk_hugedir_t *cb, void *user_data)
if (dir)
closedir(dir);
- RTE_LOG(ERR, EAL, "Error while walking hugepage dir: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error while walking hugepage dir: %s",
strerror(errno));
return -1;
@@ -403,7 +403,7 @@ inspect_hugedir_cb(const struct walk_hugedir_data *whd)
struct stat st;
if (fstat(whd->file_fd, &st) < 0)
- RTE_LOG(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): stat(\"%s\") failed: %s",
__func__, whd->file_name, strerror(errno));
else
(*total_size) += st.st_size;
@@ -492,8 +492,8 @@ hugepage_info_init(void)
dir = opendir(sys_dir_path);
if (dir == NULL) {
- RTE_LOG(ERR, EAL,
- "Cannot open directory %s to read system hugepage info\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot open directory %s to read system hugepage info",
sys_dir_path);
return -1;
}
@@ -520,10 +520,10 @@ hugepage_info_init(void)
num_pages = get_num_hugepages(dirent->d_name,
hpi->hugepage_sz, 0);
if (num_pages > 0)
- RTE_LOG(NOTICE, EAL,
+ RTE_LOG_LINE(NOTICE, EAL,
"%" PRIu32 " hugepages of size "
"%" PRIu64 " reserved, but no mounted "
- "hugetlbfs found for that size\n",
+ "hugetlbfs found for that size",
num_pages, hpi->hugepage_sz);
/* if we have kernel support for reserving hugepages
* through mmap, and we're in in-memory mode, treat this
@@ -533,9 +533,9 @@ hugepage_info_init(void)
*/
#ifdef MAP_HUGE_SHIFT
if (internal_conf->in_memory) {
- RTE_LOG(DEBUG, EAL, "In-memory mode enabled, "
+ RTE_LOG_LINE(DEBUG, EAL, "In-memory mode enabled, "
"hugepages of size %" PRIu64 " bytes "
- "will be allocated anonymously\n",
+ "will be allocated anonymously",
hpi->hugepage_sz);
calc_num_pages(hpi, dirent, 0);
num_sizes++;
@@ -549,8 +549,8 @@ hugepage_info_init(void)
/* if blocking lock failed */
if (flock(hpi->lock_descriptor, LOCK_EX) == -1) {
- RTE_LOG(CRIT, EAL,
- "Failed to lock hugepage directory!\n");
+ RTE_LOG_LINE(CRIT, EAL,
+ "Failed to lock hugepage directory!");
break;
}
@@ -626,7 +626,7 @@ eal_hugepage_info_init(void)
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
sizeof(internal_conf->hugepage_info));
if (tmp_hpi == NULL) {
- RTE_LOG(ERR, EAL, "Failed to create shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!");
return -1;
}
@@ -641,7 +641,7 @@ eal_hugepage_info_init(void)
}
if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
- RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!");
return -1;
}
return 0;
@@ -657,14 +657,14 @@ int eal_hugepage_info_read(void)
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
sizeof(internal_conf->hugepage_info));
if (tmp_hpi == NULL) {
- RTE_LOG(ERR, EAL, "Failed to open shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to open shared memory!");
return -1;
}
memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info));
if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
- RTE_LOG(ERR, EAL, "Failed to unmap shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to unmap shared memory!");
return -1;
}
return 0;
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index eabac24992..9a7169c4e4 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -123,7 +123,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error enabling INTx interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -140,7 +140,7 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -168,7 +168,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error masking INTx interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -184,7 +184,7 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error disabling INTx interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error disabling INTx interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -208,7 +208,7 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
- RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error unmasking INTx interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -238,7 +238,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error enabling MSI interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -264,7 +264,7 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL, "Error disabling MSI interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error disabling MSI interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return ret;
@@ -303,7 +303,7 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error enabling MSI-X interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -331,7 +331,7 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL, "Error disabling MSI-X interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error disabling MSI-X interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return ret;
@@ -363,7 +363,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret) {
- RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error enabling req interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -392,7 +392,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Error disabling req interrupts for fd %d",
rte_intr_fd_get(intr_handle));
return ret;
@@ -409,16 +409,16 @@ uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
/* use UIO config file descriptor for uio_pci_generic */
uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
- RTE_LOG(ERR, EAL,
- "Error reading interrupts status for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error reading interrupts status for fd %d",
uio_cfg_fd);
return -1;
}
/* disable interrupts */
command_high |= 0x4;
if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
- RTE_LOG(ERR, EAL,
- "Error disabling interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error disabling interrupts for fd %d",
uio_cfg_fd);
return -1;
}
@@ -435,16 +435,16 @@ uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
/* use UIO config file descriptor for uio_pci_generic */
uio_cfg_fd = rte_intr_dev_fd_get(intr_handle);
if (uio_cfg_fd < 0 || pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
- RTE_LOG(ERR, EAL,
- "Error reading interrupts status for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error reading interrupts status for fd %d",
uio_cfg_fd);
return -1;
}
/* enable interrupts */
command_high &= ~0x4;
if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
- RTE_LOG(ERR, EAL,
- "Error enabling interrupts for fd %d\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error enabling interrupts for fd %d",
uio_cfg_fd);
return -1;
}
@@ -459,7 +459,7 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
if (rte_intr_fd_get(intr_handle) < 0 ||
write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL, "Error disabling interrupts for fd %d (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Error disabling interrupts for fd %d (%s)",
rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
@@ -473,7 +473,7 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
if (rte_intr_fd_get(intr_handle) < 0 ||
write(rte_intr_fd_get(intr_handle), &value, sizeof(value)) < 0) {
- RTE_LOG(ERR, EAL, "Error enabling interrupts for fd %d (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Error enabling interrupts for fd %d (%s)",
rte_intr_fd_get(intr_handle), strerror(errno));
return -1;
}
@@ -492,14 +492,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
/* first do parameter checking */
if (rte_intr_fd_get(intr_handle) < 0 || cb == NULL) {
- RTE_LOG(ERR, EAL, "Registering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL, "Registering with invalid input parameter");
return -EINVAL;
}
/* allocate a new interrupt callback entity */
callback = calloc(1, sizeof(*callback));
if (callback == NULL) {
- RTE_LOG(ERR, EAL, "Can not allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not allocate memory");
return -ENOMEM;
}
callback->cb_fn = cb;
@@ -526,14 +526,14 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
if (src == NULL) {
src = calloc(1, sizeof(*src));
if (src == NULL) {
- RTE_LOG(ERR, EAL, "Can not allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not allocate memory");
ret = -ENOMEM;
free(callback);
callback = NULL;
} else {
src->intr_handle = rte_intr_instance_dup(intr_handle);
if (src->intr_handle == NULL) {
- RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+ RTE_LOG_LINE(ERR, EAL, "Can not create intr instance");
ret = -ENOMEM;
free(callback);
callback = NULL;
@@ -575,7 +575,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
/* do parameter checking first */
if (rte_intr_fd_get(intr_handle) < 0) {
- RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter");
return -EINVAL;
}
@@ -625,7 +625,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
/* do parameter checking first */
if (rte_intr_fd_get(intr_handle) < 0) {
- RTE_LOG(ERR, EAL, "Unregistering with invalid input parameter\n");
+ RTE_LOG_LINE(ERR, EAL, "Unregistering with invalid input parameter");
return -EINVAL;
}
@@ -752,7 +752,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d",
rte_intr_fd_get(intr_handle));
rc = -1;
break;
@@ -817,7 +817,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
return -1;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d",
rte_intr_fd_get(intr_handle));
return -1;
}
@@ -884,7 +884,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
break;
/* unknown handle type */
default:
- RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Unknown handle type of fd %d",
rte_intr_fd_get(intr_handle));
rc = -1;
break;
@@ -972,8 +972,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
if (errno == EINTR || errno == EWOULDBLOCK)
continue;
- RTE_LOG(ERR, EAL, "Error reading from file "
- "descriptor %d: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error reading from file "
+ "descriptor %d: %s",
events[n].data.fd,
strerror(errno));
/*
@@ -995,8 +995,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
free(src);
return -1;
} else if (bytes_read == 0)
- RTE_LOG(ERR, EAL, "Read nothing from file "
- "descriptor %d\n", events[n].data.fd);
+ RTE_LOG_LINE(ERR, EAL, "Read nothing from file "
+ "descriptor %d", events[n].data.fd);
else
call = true;
}
@@ -1080,8 +1080,8 @@ eal_intr_handle_interrupts(int pfd, unsigned totalfds)
if (nfds < 0) {
if (errno == EINTR)
continue;
- RTE_LOG(ERR, EAL,
- "epoll_wait returns with fail\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "epoll_wait returns with fail");
return;
}
/* epoll_wait timeout, will never happens here */
@@ -1192,8 +1192,8 @@ rte_eal_intr_init(void)
eal_intr_thread_main, NULL);
if (ret != 0) {
rte_errno = -ret;
- RTE_LOG(ERR, EAL,
- "Failed to create thread for interrupt handling\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Failed to create thread for interrupt handling");
}
return ret;
@@ -1226,7 +1226,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
return;
default:
bytes_read = 1;
- RTE_LOG(INFO, EAL, "unexpected intr type\n");
+ RTE_LOG_LINE(INFO, EAL, "unexpected intr type");
break;
}
@@ -1242,11 +1242,11 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
if (errno == EINTR || errno == EWOULDBLOCK ||
errno == EAGAIN)
continue;
- RTE_LOG(ERR, EAL,
- "Error reading from fd %d: %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Error reading from fd %d: %s",
fd, strerror(errno));
} else if (nbytes == 0)
- RTE_LOG(ERR, EAL, "Read nothing from fd %d\n", fd);
+ RTE_LOG_LINE(ERR, EAL, "Read nothing from fd %d", fd);
return;
} while (1);
}
@@ -1296,8 +1296,8 @@ eal_init_tls_epfd(void)
int pfd = epoll_create(255);
if (pfd < 0) {
- RTE_LOG(ERR, EAL,
- "Cannot create epoll instance\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot create epoll instance");
return -1;
}
return pfd;
@@ -1320,7 +1320,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events,
int rc;
if (!events) {
- RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n");
+ RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL");
return -1;
}
@@ -1342,7 +1342,7 @@ eal_epoll_wait(int epfd, struct rte_epoll_event *events,
continue;
}
/* epoll_wait fail */
- RTE_LOG(ERR, EAL, "epoll_wait returns with fail %s\n",
+ RTE_LOG_LINE(ERR, EAL, "epoll_wait returns with fail %s",
strerror(errno));
rc = -1;
break;
@@ -1393,7 +1393,7 @@ rte_epoll_ctl(int epfd, int op, int fd,
struct epoll_event ev;
if (!event) {
- RTE_LOG(ERR, EAL, "rte_epoll_event can't be NULL\n");
+ RTE_LOG_LINE(ERR, EAL, "rte_epoll_event can't be NULL");
return -1;
}
@@ -1411,7 +1411,7 @@ rte_epoll_ctl(int epfd, int op, int fd,
ev.events = event->epdata.event;
if (epoll_ctl(epfd, op, fd, &ev) < 0) {
- RTE_LOG(ERR, EAL, "Error op %d fd %d epoll_ctl, %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error op %d fd %d epoll_ctl, %s",
op, fd, strerror(errno));
if (op == EPOLL_CTL_ADD)
/* rollback status when CTL_ADD fail */
@@ -1442,7 +1442,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
if (intr_handle == NULL || rte_intr_nb_efd_get(intr_handle) == 0 ||
efd_idx >= (unsigned int)rte_intr_nb_efd_get(intr_handle)) {
- RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
+ RTE_LOG_LINE(ERR, EAL, "Wrong intr vector number.");
return -EPERM;
}
@@ -1452,7 +1452,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (rte_atomic_load_explicit(&rev->status,
rte_memory_order_relaxed) != RTE_EPOLL_INVALID) {
- RTE_LOG(INFO, EAL, "Event already been added.\n");
+ RTE_LOG_LINE(INFO, EAL, "Event already been added.");
return -EEXIST;
}
@@ -1465,9 +1465,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
rc = rte_epoll_ctl(epfd, epfd_op,
rte_intr_efds_index_get(intr_handle, efd_idx), rev);
if (!rc)
- RTE_LOG(DEBUG, EAL,
- "efd %d associated with vec %d added on epfd %d"
- "\n", rev->fd, vec, epfd);
+ RTE_LOG_LINE(DEBUG, EAL,
+ "efd %d associated with vec %d added on epfd %d",
+ rev->fd, vec, epfd);
else
rc = -EPERM;
break;
@@ -1476,7 +1476,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
rev = rte_intr_elist_index_get(intr_handle, efd_idx);
if (rte_atomic_load_explicit(&rev->status,
rte_memory_order_relaxed) == RTE_EPOLL_INVALID) {
- RTE_LOG(INFO, EAL, "Event does not exist.\n");
+ RTE_LOG_LINE(INFO, EAL, "Event does not exist.");
return -EPERM;
}
@@ -1485,7 +1485,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
rc = -EPERM;
break;
default:
- RTE_LOG(ERR, EAL, "event op type mismatch\n");
+ RTE_LOG_LINE(ERR, EAL, "event op type mismatch");
rc = -EPERM;
}
@@ -1523,8 +1523,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
for (i = 0; i < n; i++) {
fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (fd < 0) {
- RTE_LOG(ERR, EAL,
- "can't setup eventfd, error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "can't setup eventfd, error %i (%s)",
errno, strerror(errno));
return -errno;
}
@@ -1542,7 +1542,7 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
/* only check, initialization would be done in vdev driver.*/
if ((uint64_t)rte_intr_efd_counter_size_get(intr_handle) >
sizeof(union rte_intr_read_buffer)) {
- RTE_LOG(ERR, EAL, "the efd_counter_size is oversized\n");
+ RTE_LOG_LINE(ERR, EAL, "the efd_counter_size is oversized");
return -EINVAL;
}
} else {
diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c
index 2e6a350603..42bf0ee7a1 100644
--- a/lib/eal/linux/eal_lcore.c
+++ b/lib/eal/linux/eal_lcore.c
@@ -68,7 +68,7 @@ eal_cpu_core_id(unsigned lcore_id)
return (unsigned)id;
err:
- RTE_LOG(ERR, EAL, "Error reading core id value from %s "
- "for lcore %u - assuming core 0\n", SYS_CPU_DIR, lcore_id);
+ RTE_LOG_LINE(ERR, EAL, "Error reading core id value from %s "
+ "for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id);
return 0;
}
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index 9853ec78a2..35a1868e32 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -147,7 +147,7 @@ check_numa(void)
bool ret = true;
/* Check if kernel supports NUMA. */
if (numa_available() != 0) {
- RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported.");
ret = false;
}
return ret;
@@ -156,16 +156,16 @@ check_numa(void)
static void
prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id)
{
- RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy.");
if (get_mempolicy(oldpolicy, oldmask->maskp,
oldmask->size + 1, 0, 0) < 0) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Failed to get current mempolicy: %s. "
- "Assuming MPOL_DEFAULT.\n", strerror(errno));
+ "Assuming MPOL_DEFAULT.", strerror(errno));
*oldpolicy = MPOL_DEFAULT;
}
- RTE_LOG(DEBUG, EAL,
- "Setting policy MPOL_PREFERRED for socket %d\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Setting policy MPOL_PREFERRED for socket %d",
socket_id);
numa_set_preferred(socket_id);
}
@@ -173,13 +173,13 @@ prepare_numa(int *oldpolicy, struct bitmask *oldmask, int socket_id)
static void
restore_numa(int *oldpolicy, struct bitmask *oldmask)
{
- RTE_LOG(DEBUG, EAL,
- "Restoring previous memory policy: %d\n", *oldpolicy);
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Restoring previous memory policy: %d", *oldpolicy);
if (*oldpolicy == MPOL_DEFAULT) {
numa_set_localalloc();
} else if (set_mempolicy(*oldpolicy, oldmask->maskp,
oldmask->size + 1) < 0) {
- RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s",
strerror(errno));
numa_set_localalloc();
}
@@ -223,7 +223,7 @@ static int lock(int fd, int type)
/* couldn't lock */
return 0;
} else if (ret) {
- RTE_LOG(ERR, EAL, "%s(): error calling flock(): %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): error calling flock(): %s",
__func__, strerror(errno));
return -1;
}
@@ -251,7 +251,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused,
snprintf(segname, sizeof(segname), "seg_%i", list_idx);
fd = memfd_create(segname, flags);
if (fd < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s",
__func__, strerror(errno));
return -1;
}
@@ -265,7 +265,7 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused,
list_idx, seg_idx);
fd = memfd_create(segname, flags);
if (fd < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): memfd create failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): memfd create failed: %s",
__func__, strerror(errno));
return -1;
}
@@ -316,7 +316,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
*/
ret = stat(path, &st);
if (ret < 0 && errno != ENOENT) {
- RTE_LOG(DEBUG, EAL, "%s(): stat() for '%s' failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): stat() for '%s' failed: %s",
__func__, path, strerror(errno));
return -1;
}
@@ -342,7 +342,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
ret == 0) {
/* coverity[toctou] */
if (unlink(path) < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): could not remove '%s': %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): could not remove '%s': %s",
__func__, path, strerror(errno));
return -1;
}
@@ -351,13 +351,13 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
/* coverity[toctou] */
fd = open(path, O_CREAT | O_RDWR, 0600);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "%s(): open '%s' failed: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): open '%s' failed: %s",
__func__, path, strerror(errno));
return -1;
}
/* take out a read lock */
if (lock(fd, LOCK_SH) < 0) {
- RTE_LOG(ERR, EAL, "%s(): lock '%s' failed: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): lock '%s' failed: %s",
__func__, path, strerror(errno));
close(fd);
return -1;
@@ -378,7 +378,7 @@ resize_hugefile_in_memory(int fd, uint64_t fa_offset,
ret = fallocate(fd, flags, fa_offset, page_sz);
if (ret < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s",
__func__,
strerror(errno));
return -1;
@@ -402,7 +402,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
*/
if (!grow) {
- RTE_LOG(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate not supported, not freeing page back to the system",
__func__);
return -1;
}
@@ -414,7 +414,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
*dirty = new_size <= cur_size;
if (new_size > cur_size &&
ftruncate(fd, new_size) < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s",
__func__, strerror(errno));
return -1;
}
@@ -444,12 +444,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
if (ret < 0) {
if (fallocate_supported == -1 &&
errno == ENOTSUP) {
- RTE_LOG(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): fallocate() not supported, hugepage deallocation will be disabled",
__func__);
again = true;
fallocate_supported = 0;
} else {
- RTE_LOG(DEBUG, EAL, "%s(): fallocate() failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): fallocate() failed: %s",
__func__,
strerror(errno));
return -1;
@@ -483,7 +483,7 @@ close_hugefile(int fd, char *path, int list_idx)
if (!internal_conf->in_memory &&
rte_eal_process_type() == RTE_PROC_PRIMARY &&
unlink(path))
- RTE_LOG(ERR, EAL, "%s(): unlinking '%s' failed: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): unlinking '%s' failed: %s",
__func__, path, strerror(errno));
close(fd);
@@ -536,12 +536,12 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
/* these are checked at init, but code analyzers don't know that */
if (internal_conf->in_memory && !anonymous_hugepages_supported) {
- RTE_LOG(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Anonymous hugepages not supported, in-memory mode cannot allocate memory");
return -1;
}
if (internal_conf->in_memory && !memfd_create_supported &&
internal_conf->single_file_segments) {
- RTE_LOG(ERR, EAL, "Single-file segments are not supported without memfd support\n");
+ RTE_LOG_LINE(ERR, EAL, "Single-file segments are not supported without memfd support");
return -1;
}
@@ -569,7 +569,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
fd = get_seg_fd(path, sizeof(path), hi, list_idx, seg_idx,
&dirty);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "Couldn't get fd on hugepage file\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't get fd on hugepage file");
return -1;
}
@@ -584,14 +584,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
} else {
map_offset = 0;
if (ftruncate(fd, alloc_sz) < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): ftruncate() failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): ftruncate() failed: %s",
__func__, strerror(errno));
goto resized;
}
if (internal_conf->hugepage_file.unlink_before_mapping &&
!internal_conf->in_memory) {
if (unlink(path)) {
- RTE_LOG(DEBUG, EAL, "%s(): unlink() failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): unlink() failed: %s",
__func__, strerror(errno));
goto resized;
}
@@ -610,7 +610,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
map_offset);
if (va == MAP_FAILED) {
- RTE_LOG(DEBUG, EAL, "%s(): mmap() failed: %s\n", __func__,
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap() failed: %s", __func__,
strerror(errno));
/* mmap failed, but the previous region might have been
* unmapped anyway. try to remap it
@@ -618,7 +618,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
goto unmapped;
}
if (va != addr) {
- RTE_LOG(DEBUG, EAL, "%s(): wrong mmap() address\n", __func__);
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): wrong mmap() address", __func__);
munmap(va, alloc_sz);
goto resized;
}
@@ -631,7 +631,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
* back here.
*/
if (huge_wrap_sigsetjmp()) {
- RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB\n",
+ RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more hugepages of size %uMB",
(unsigned int)(alloc_sz >> 20));
goto mapped;
}
@@ -645,7 +645,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
iova = rte_mem_virt2iova(addr);
if (iova == RTE_BAD_PHYS_ADDR) {
- RTE_LOG(DEBUG, EAL, "%s(): can't get IOVA addr\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): can't get IOVA addr",
__func__);
goto mapped;
}
@@ -661,19 +661,19 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
ret = get_mempolicy(&cur_socket_id, NULL, 0, addr,
MPOL_F_NODE | MPOL_F_ADDR);
if (ret < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): get_mempolicy: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): get_mempolicy: %s",
__func__, strerror(errno));
goto mapped;
} else if (cur_socket_id != socket_id) {
- RTE_LOG(DEBUG, EAL,
- "%s(): allocation happened on wrong socket (wanted %d, got %d)\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "%s(): allocation happened on wrong socket (wanted %d, got %d)",
__func__, socket_id, cur_socket_id);
goto mapped;
}
}
#else
if (rte_socket_count() > 1)
- RTE_LOG(DEBUG, EAL, "%s(): not checking hugepage NUMA node.\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): not checking hugepage NUMA node.",
__func__);
#endif
@@ -703,7 +703,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
* somebody else maps this hole now, we could accidentally
* override it in the future.
*/
- RTE_LOG(CRIT, EAL, "Can't mmap holes in our virtual address space\n");
+ RTE_LOG_LINE(CRIT, EAL, "Can't mmap holes in our virtual address space");
}
/* roll back the ref count */
if (internal_conf->single_file_segments)
@@ -748,7 +748,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
if (mmap(ms->addr, ms->len, PROT_NONE,
MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) ==
MAP_FAILED) {
- RTE_LOG(DEBUG, EAL, "couldn't unmap page\n");
+ RTE_LOG_LINE(DEBUG, EAL, "couldn't unmap page");
return -1;
}
@@ -873,13 +873,13 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) {
dir_fd = open(wa->hi->hugedir, O_RDONLY);
if (dir_fd < 0) {
- RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s",
__func__, wa->hi->hugedir, strerror(errno));
return -1;
}
/* blocking writelock */
if (flock(dir_fd, LOCK_EX)) {
- RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s",
__func__, wa->hi->hugedir, strerror(errno));
close(dir_fd);
return -1;
@@ -896,7 +896,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
if (alloc_seg(cur, map_addr, wa->socket, wa->hi,
msl_idx, cur_idx)) {
- RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated\n",
+ RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, but only %i were allocated",
need, i);
/* if exact number wasn't requested, stop */
@@ -916,7 +916,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
* may fail.
*/
if (free_seg(tmp, wa->hi, msl_idx, j))
- RTE_LOG(DEBUG, EAL, "Cannot free page\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot free page");
}
/* clear the list */
if (wa->ms)
@@ -980,13 +980,13 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg)
if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) {
dir_fd = open(wa->hi->hugedir, O_RDONLY);
if (dir_fd < 0) {
- RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s",
__func__, wa->hi->hugedir, strerror(errno));
return -1;
}
/* blocking writelock */
if (flock(dir_fd, LOCK_EX)) {
- RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s",
__func__, wa->hi->hugedir, strerror(errno));
close(dir_fd);
return -1;
@@ -1037,7 +1037,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
}
}
if (!hi) {
- RTE_LOG(ERR, EAL, "%s(): can't find relevant hugepage_info entry\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): can't find relevant hugepage_info entry",
__func__);
return -1;
}
@@ -1061,7 +1061,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
/* memalloc is locked, so it's safe to use thread-unsafe version */
ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa);
if (ret == 0) {
- RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): couldn't find suitable memseg_list",
__func__);
ret = -1;
} else if (ret > 0) {
@@ -1104,7 +1104,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
/* if this page is marked as unfreeable, fail */
if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) {
- RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed");
ret = -1;
continue;
}
@@ -1118,7 +1118,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
break;
}
if (i == (int)RTE_DIM(internal_conf->hugepage_info)) {
- RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n");
+ RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
}
@@ -1133,7 +1133,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
if (walk_res == 1)
continue;
if (walk_res == 0)
- RTE_LOG(ERR, EAL, "Couldn't find memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list");
ret = -1;
}
return ret;
@@ -1344,13 +1344,13 @@ sync_existing(struct rte_memseg_list *primary_msl,
*/
dir_fd = open(hi->hugedir, O_RDONLY);
if (dir_fd < 0) {
- RTE_LOG(ERR, EAL, "%s(): Cannot open '%s': %s\n", __func__,
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot open '%s': %s", __func__,
hi->hugedir, strerror(errno));
return -1;
}
/* blocking writelock */
if (flock(dir_fd, LOCK_EX)) {
- RTE_LOG(ERR, EAL, "%s(): Cannot lock '%s': %s\n", __func__,
+ RTE_LOG_LINE(ERR, EAL, "%s(): Cannot lock '%s': %s", __func__,
hi->hugedir, strerror(errno));
close(dir_fd);
return -1;
@@ -1405,7 +1405,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
}
}
if (!hi) {
- RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n");
+ RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry");
return -1;
}
@@ -1454,7 +1454,7 @@ secondary_msl_create_walk(const struct rte_memseg_list *msl,
primary_msl->memseg_arr.len,
primary_msl->memseg_arr.elt_sz);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Cannot initialize local memory map\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot initialize local memory map");
return -1;
}
local_msl->base_va = primary_msl->base_va;
@@ -1479,7 +1479,7 @@ secondary_msl_destroy_walk(const struct rte_memseg_list *msl,
ret = rte_fbarray_destroy(&local_msl->memseg_arr);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Cannot destroy local memory map\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot destroy local memory map");
return -1;
}
local_msl->base_va = NULL;
@@ -1501,7 +1501,7 @@ alloc_list(int list_idx, int len)
/* ensure we have space to store fd per each possible segment */
data = malloc(sizeof(int) * len);
if (data == NULL) {
- RTE_LOG(ERR, EAL, "Unable to allocate space for file descriptors\n");
+ RTE_LOG_LINE(ERR, EAL, "Unable to allocate space for file descriptors");
return -1;
}
/* set all fd's as invalid */
@@ -1750,13 +1750,13 @@ eal_memalloc_init(void)
int mfd_res = test_memfd_create();
if (mfd_res < 0) {
- RTE_LOG(ERR, EAL, "Unable to check if memfd is supported\n");
+ RTE_LOG_LINE(ERR, EAL, "Unable to check if memfd is supported");
return -1;
}
if (mfd_res == 1)
- RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory");
else
- RTE_LOG(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages\n");
+ RTE_LOG_LINE(INFO, EAL, "Using memfd is not supported, falling back to anonymous hugepages");
/* we only support single-file segments mode with in-memory mode
* if we support hugetlbfs with memfd_create. this code will
@@ -1764,18 +1764,18 @@ eal_memalloc_init(void)
*/
if (internal_conf->single_file_segments &&
mfd_res != 1) {
- RTE_LOG(ERR, EAL, "Single-file segments mode cannot be used without memfd support\n");
+ RTE_LOG_LINE(ERR, EAL, "Single-file segments mode cannot be used without memfd support");
return -1;
}
/* this cannot ever happen but better safe than sorry */
if (!anonymous_hugepages_supported) {
- RTE_LOG(ERR, EAL, "Using anonymous memory is not supported\n");
+ RTE_LOG_LINE(ERR, EAL, "Using anonymous memory is not supported");
return -1;
}
/* safety net, should be impossible to configure */
if (internal_conf->hugepage_file.unlink_before_mapping &&
!internal_conf->hugepage_file.unlink_existing) {
- RTE_LOG(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.\n");
+ RTE_LOG_LINE(ERR, EAL, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.");
return -1;
}
}
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 9b6f08fba8..2f2551588b 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -104,7 +104,7 @@ rte_mem_virt2phy(const void *virtaddr)
fd = open("/proc/self/pagemap", O_RDONLY);
if (fd < 0) {
- RTE_LOG(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "%s(): cannot open /proc/self/pagemap: %s",
__func__, strerror(errno));
return RTE_BAD_IOVA;
}
@@ -112,7 +112,7 @@ rte_mem_virt2phy(const void *virtaddr)
virt_pfn = (unsigned long)virtaddr / page_size;
offset = sizeof(uint64_t) * virt_pfn;
if (lseek(fd, offset, SEEK_SET) == (off_t) -1) {
- RTE_LOG(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "%s(): seek error in /proc/self/pagemap: %s",
__func__, strerror(errno));
close(fd);
return RTE_BAD_IOVA;
@@ -121,12 +121,12 @@ rte_mem_virt2phy(const void *virtaddr)
retval = read(fd, &page, PFN_MASK_SIZE);
close(fd);
if (retval < 0) {
- RTE_LOG(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "%s(): cannot read /proc/self/pagemap: %s",
__func__, strerror(errno));
return RTE_BAD_IOVA;
} else if (retval != PFN_MASK_SIZE) {
- RTE_LOG(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap "
- "but expected %d:\n",
+ RTE_LOG_LINE(INFO, EAL, "%s(): read %d bytes from /proc/self/pagemap "
+ "but expected %d:",
__func__, retval, PFN_MASK_SIZE);
return RTE_BAD_IOVA;
}
@@ -237,7 +237,7 @@ static int huge_wrap_sigsetjmp(void)
/* Callback for numa library. */
void numa_error(char *where)
{
- RTE_LOG(ERR, EAL, "%s failed: %s\n", where, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "%s failed: %s", where, strerror(errno));
}
#endif
@@ -267,18 +267,18 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
/* Check if kernel supports NUMA. */
if (numa_available() != 0) {
- RTE_LOG(DEBUG, EAL, "NUMA is not supported.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "NUMA is not supported.");
have_numa = false;
}
if (have_numa) {
- RTE_LOG(DEBUG, EAL, "Trying to obtain current memory policy.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Trying to obtain current memory policy.");
oldmask = numa_allocate_nodemask();
if (get_mempolicy(&oldpolicy, oldmask->maskp,
oldmask->size + 1, 0, 0) < 0) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Failed to get current mempolicy: %s. "
- "Assuming MPOL_DEFAULT.\n", strerror(errno));
+ "Assuming MPOL_DEFAULT.", strerror(errno));
oldpolicy = MPOL_DEFAULT;
}
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
@@ -316,8 +316,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
essential_memory[j] -= hugepage_sz;
}
- RTE_LOG(DEBUG, EAL,
- "Setting policy MPOL_PREFERRED for socket %d\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Setting policy MPOL_PREFERRED for socket %d",
node_id);
numa_set_preferred(node_id);
}
@@ -332,7 +332,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
/* try to create hugepage file */
fd = open(hf->filepath, O_CREAT | O_RDWR, 0600);
if (fd < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): open failed: %s\n", __func__,
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): open failed: %s", __func__,
strerror(errno));
goto out;
}
@@ -345,7 +345,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
virtaddr = mmap(NULL, hugepage_sz, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_POPULATE, fd, 0);
if (virtaddr == MAP_FAILED) {
- RTE_LOG(DEBUG, EAL, "%s(): mmap failed: %s\n", __func__,
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): mmap failed: %s", __func__,
strerror(errno));
close(fd);
goto out;
@@ -361,8 +361,8 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
* back here.
*/
if (huge_wrap_sigsetjmp()) {
- RTE_LOG(DEBUG, EAL, "SIGBUS: Cannot mmap more "
- "hugepages of size %u MB\n",
+ RTE_LOG_LINE(DEBUG, EAL, "SIGBUS: Cannot mmap more "
+ "hugepages of size %u MB",
(unsigned int)(hugepage_sz / 0x100000));
munmap(virtaddr, hugepage_sz);
close(fd);
@@ -378,7 +378,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
/* set shared lock on the file. */
if (flock(fd, LOCK_SH) < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): Locking file failed:%s \n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed:%s ",
__func__, strerror(errno));
close(fd);
goto out;
@@ -390,13 +390,13 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
out:
#ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES
if (maxnode) {
- RTE_LOG(DEBUG, EAL,
- "Restoring previous memory policy: %d\n", oldpolicy);
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Restoring previous memory policy: %d", oldpolicy);
if (oldpolicy == MPOL_DEFAULT) {
numa_set_localalloc();
} else if (set_mempolicy(oldpolicy, oldmask->maskp,
oldmask->size + 1) < 0) {
- RTE_LOG(ERR, EAL, "Failed to restore mempolicy: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Failed to restore mempolicy: %s",
strerror(errno));
numa_set_localalloc();
}
@@ -424,8 +424,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
f = fopen("/proc/self/numa_maps", "r");
if (f == NULL) {
- RTE_LOG(NOTICE, EAL, "NUMA support not available"
- " consider that all memory is in socket_id 0\n");
+ RTE_LOG_LINE(NOTICE, EAL, "NUMA support not available"
+ " consider that all memory is in socket_id 0");
return 0;
}
@@ -443,20 +443,20 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
/* get zone addr */
virt_addr = strtoull(buf, &end, 16);
if (virt_addr == 0 || end == buf) {
- RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
+ RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__);
goto error;
}
/* get node id (socket id) */
nodestr = strstr(buf, " N");
if (nodestr == NULL) {
- RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
+ RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__);
goto error;
}
nodestr += 2;
end = strstr(nodestr, "=");
if (end == NULL) {
- RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
+ RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__);
goto error;
}
end[0] = '\0';
@@ -464,7 +464,7 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
socket_id = strtoul(nodestr, &end, 0);
if ((nodestr[0] == '\0') || (end == NULL) || (*end != '\0')) {
- RTE_LOG(ERR, EAL, "%s(): error in numa_maps parsing\n", __func__);
+ RTE_LOG_LINE(ERR, EAL, "%s(): error in numa_maps parsing", __func__);
goto error;
}
@@ -475,8 +475,8 @@ find_numasocket(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
hugepg_tbl[i].socket_id = socket_id;
hp_count++;
#ifdef RTE_EAL_NUMA_AWARE_HUGEPAGES
- RTE_LOG(DEBUG, EAL,
- "Hugepage %s is on socket %d\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Hugepage %s is on socket %d",
hugepg_tbl[i].filepath, socket_id);
#endif
}
@@ -589,7 +589,7 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl,
struct hugepage_file *hp = &hugepg_tbl[page];
if (hp->orig_va != NULL && unlink(hp->filepath)) {
- RTE_LOG(WARNING, EAL, "%s(): Removing %s failed: %s\n",
+ RTE_LOG_LINE(WARNING, EAL, "%s(): Removing %s failed: %s",
__func__, hp->filepath, strerror(errno));
}
}
@@ -639,7 +639,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl,
hp->orig_va = NULL;
if (unlink(hp->filepath) == -1) {
- RTE_LOG(ERR, EAL, "%s(): Removing %s failed: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): Removing %s failed: %s",
__func__, hp->filepath, strerror(errno));
return -1;
}
@@ -676,7 +676,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
socket_id = hugepages[seg_start].socket_id;
seg_len = seg_end - seg_start;
- RTE_LOG(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Attempting to map %" PRIu64 "M on socket %i",
(seg_len * page_sz) >> 20ULL, socket_id);
/* find free space in memseg lists */
@@ -716,8 +716,8 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
break;
}
if (msl_idx == RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
- "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not find space for memseg. Please increase RTE_MAX_MEMSEG_PER_LIST "
+ "RTE_MAX_MEMSEG_PER_TYPE and/or RTE_MAX_MEM_MB_PER_TYPE in configuration.");
return -1;
}
@@ -735,13 +735,13 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
fd = open(hfile->filepath, O_RDWR);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "Could not open '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not open '%s': %s",
hfile->filepath, strerror(errno));
return -1;
}
/* set shared lock on the file. */
if (flock(fd, LOCK_SH) < 0) {
- RTE_LOG(DEBUG, EAL, "Could not lock '%s': %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Could not lock '%s': %s",
hfile->filepath, strerror(errno));
close(fd);
return -1;
@@ -755,7 +755,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
addr = mmap(addr, page_sz, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_POPULATE | MAP_FIXED, fd, 0);
if (addr == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "Couldn't remap '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Couldn't remap '%s': %s",
hfile->filepath, strerror(errno));
close(fd);
return -1;
@@ -790,10 +790,10 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* store segment fd internally */
if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0)
- RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s",
rte_strerror(rte_errno));
}
- RTE_LOG(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Allocated %" PRIu64 "M on socket %i",
(seg_len * page_sz) >> 20, socket_id);
return seg_len;
}
@@ -819,7 +819,7 @@ static int
memseg_list_free(struct rte_memseg_list *msl)
{
if (rte_fbarray_destroy(&msl->memseg_arr)) {
- RTE_LOG(ERR, EAL, "Cannot destroy memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot destroy memseg list");
return -1;
}
memset(msl, 0, sizeof(*msl));
@@ -965,7 +965,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
break;
}
if (msl_idx == RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n");
+ RTE_LOG_LINE(ERR, EAL, "Not enough space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS");
return -1;
}
@@ -976,7 +976,7 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
/* finally, allocate VA space */
if (eal_memseg_list_alloc(msl, 0) < 0) {
- RTE_LOG(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot preallocate 0x%"PRIx64"kB hugepages",
page_sz >> 10);
return -1;
}
@@ -1177,15 +1177,15 @@ eal_legacy_hugepage_init(void)
/* create a memfd and store it in the segment fd table */
memfd = memfd_create("nohuge", 0);
if (memfd < 0) {
- RTE_LOG(DEBUG, EAL, "Cannot create memfd: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot create memfd: %s",
strerror(errno));
- RTE_LOG(DEBUG, EAL, "Falling back to anonymous map\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Falling back to anonymous map");
} else {
/* we got an fd - now resize it */
if (ftruncate(memfd, internal_conf->memory) < 0) {
- RTE_LOG(ERR, EAL, "Cannot resize memfd: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot resize memfd: %s",
strerror(errno));
- RTE_LOG(ERR, EAL, "Falling back to anonymous map\n");
+ RTE_LOG_LINE(ERR, EAL, "Falling back to anonymous map");
close(memfd);
} else {
/* creating memfd-backed file was successful.
@@ -1193,7 +1193,7 @@ eal_legacy_hugepage_init(void)
* other processes (such as vhost backend), so
* map it as shared memory.
*/
- RTE_LOG(DEBUG, EAL, "Using memfd for anonymous memory\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Using memfd for anonymous memory");
fd = memfd;
flags = MAP_SHARED;
}
@@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void)
* fit into the DMA mask.
*/
if (eal_memseg_list_alloc(msl, 0)) {
- RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory");
return -1;
}
@@ -1211,7 +1211,7 @@ eal_legacy_hugepage_init(void)
addr = mmap(prealloc_addr, mem_sz, PROT_READ | PROT_WRITE,
flags | MAP_FIXED, fd, 0);
if (addr == MAP_FAILED || addr != prealloc_addr) {
- RTE_LOG(ERR, EAL, "%s: mmap() failed: %s\n", __func__,
+ RTE_LOG_LINE(ERR, EAL, "%s: mmap() failed: %s", __func__,
strerror(errno));
munmap(prealloc_addr, mem_sz);
return -1;
@@ -1222,7 +1222,7 @@ eal_legacy_hugepage_init(void)
*/
if (fd != -1) {
if (eal_memalloc_set_seg_list_fd(0, fd) < 0) {
- RTE_LOG(ERR, EAL, "Cannot set up segment list fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot set up segment list fd");
/* not a serious error, proceed */
}
}
@@ -1231,13 +1231,13 @@ eal_legacy_hugepage_init(void)
if (mcfg->dma_maskbits &&
rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
- RTE_LOG(ERR, EAL,
- "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.",
__func__);
if (rte_eal_iova_mode() == RTE_IOVA_VA &&
rte_eal_using_phys_addrs())
- RTE_LOG(ERR, EAL,
- "%s(): Please try initializing EAL with --iova-mode=pa parameter.\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s(): Please try initializing EAL with --iova-mode=pa parameter.",
__func__);
goto fail;
}
@@ -1292,8 +1292,8 @@ eal_legacy_hugepage_init(void)
pages_old = hpi->num_pages[0];
pages_new = map_all_hugepages(&tmp_hp[hp_offset], hpi, memory);
if (pages_new < pages_old) {
- RTE_LOG(DEBUG, EAL,
- "%d not %d hugepages of size %u MB allocated\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "%d not %d hugepages of size %u MB allocated",
pages_new, pages_old,
(unsigned)(hpi->hugepage_sz / 0x100000));
@@ -1309,23 +1309,23 @@ eal_legacy_hugepage_init(void)
rte_eal_iova_mode() != RTE_IOVA_VA) {
/* find physical addresses for each hugepage */
if (find_physaddrs(&tmp_hp[hp_offset], hpi) < 0) {
- RTE_LOG(DEBUG, EAL, "Failed to find phys addr "
- "for %u MB pages\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Failed to find phys addr for %u MB pages",
(unsigned int)(hpi->hugepage_sz / 0x100000));
goto fail;
}
} else {
/* set physical addresses for each hugepage */
if (set_physaddrs(&tmp_hp[hp_offset], hpi) < 0) {
- RTE_LOG(DEBUG, EAL, "Failed to set phys addr "
- "for %u MB pages\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Failed to set phys addr for %u MB pages",
(unsigned int)(hpi->hugepage_sz / 0x100000));
goto fail;
}
}
if (find_numasocket(&tmp_hp[hp_offset], hpi) < 0){
- RTE_LOG(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Failed to find NUMA socket for %u MB pages",
(unsigned)(hpi->hugepage_sz / 0x100000));
goto fail;
}
@@ -1382,9 +1382,9 @@ eal_legacy_hugepage_init(void)
for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
if (used_hp[i].num_pages[j] > 0) {
- RTE_LOG(DEBUG, EAL,
+ RTE_LOG_LINE(DEBUG, EAL,
"Requesting %u pages of size %uMB"
- " from socket %i\n",
+ " from socket %i",
used_hp[i].num_pages[j],
(unsigned)
(used_hp[i].hugepage_sz / 0x100000),
@@ -1398,7 +1398,7 @@ eal_legacy_hugepage_init(void)
nr_hugefiles * sizeof(struct hugepage_file));
if (hugepage == NULL) {
- RTE_LOG(ERR, EAL, "Failed to create shared memory!\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to create shared memory!");
goto fail;
}
memset(hugepage, 0, nr_hugefiles * sizeof(struct hugepage_file));
@@ -1409,7 +1409,7 @@ eal_legacy_hugepage_init(void)
*/
if (unmap_unneeded_hugepages(tmp_hp, used_hp,
internal_conf->num_hugepage_sizes) < 0) {
- RTE_LOG(ERR, EAL, "Unmapping and locking hugepages failed!\n");
+ RTE_LOG_LINE(ERR, EAL, "Unmapping and locking hugepages failed!");
goto fail;
}
@@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void)
*/
if (copy_hugepages_to_shared_mem(hugepage, nr_hugefiles,
tmp_hp, nr_hugefiles) < 0) {
- RTE_LOG(ERR, EAL, "Copying tables to shared memory failed!\n");
+ RTE_LOG_LINE(ERR, EAL, "Copying tables to shared memory failed!");
goto fail;
}
@@ -1428,7 +1428,7 @@ eal_legacy_hugepage_init(void)
/* for legacy 32-bit mode, we did not preallocate VA space, so do it */
if (internal_conf->legacy_mem &&
prealloc_segments(hugepage, nr_hugefiles)) {
- RTE_LOG(ERR, EAL, "Could not preallocate VA space for hugepages\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not preallocate VA space for hugepages");
goto fail;
}
#endif
@@ -1437,14 +1437,14 @@ eal_legacy_hugepage_init(void)
* pages become first-class citizens in DPDK memory subsystem
*/
if (remap_needed_hugepages(hugepage, nr_hugefiles)) {
- RTE_LOG(ERR, EAL, "Couldn't remap hugepage files into memseg lists\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't remap hugepage files into memseg lists");
goto fail;
}
/* free the hugepage backing files */
if (internal_conf->hugepage_file.unlink_before_mapping &&
unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) {
- RTE_LOG(ERR, EAL, "Unlinking hugepage files failed!\n");
+ RTE_LOG_LINE(ERR, EAL, "Unlinking hugepage files failed!");
goto fail;
}
@@ -1480,8 +1480,8 @@ eal_legacy_hugepage_init(void)
if (mcfg->dma_maskbits &&
rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
- RTE_LOG(ERR, EAL,
- "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.",
__func__);
goto fail;
}
@@ -1527,15 +1527,15 @@ eal_legacy_hugepage_attach(void)
int fd, fd_hugepage = -1;
if (aslr_enabled() > 0) {
- RTE_LOG(WARNING, EAL, "WARNING: Address Space Layout Randomization "
- "(ASLR) is enabled in the kernel.\n");
- RTE_LOG(WARNING, EAL, " This may cause issues with mapping memory "
- "into secondary processes\n");
+ RTE_LOG_LINE(WARNING, EAL,
+ "WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.");
+ RTE_LOG_LINE(WARNING, EAL,
+ " This may cause issues with mapping memory into secondary processes");
}
fd_hugepage = open(eal_hugepage_data_path(), O_RDONLY);
if (fd_hugepage < 0) {
- RTE_LOG(ERR, EAL, "Could not open %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not open %s",
eal_hugepage_data_path());
goto error;
}
@@ -1543,13 +1543,13 @@ eal_legacy_hugepage_attach(void)
size = getFileSize(fd_hugepage);
hp = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd_hugepage, 0);
if (hp == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "Could not mmap %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not mmap %s",
eal_hugepage_data_path());
goto error;
}
num_hp = size / sizeof(struct hugepage_file);
- RTE_LOG(DEBUG, EAL, "Analysing %u files\n", num_hp);
+ RTE_LOG_LINE(DEBUG, EAL, "Analysing %u files", num_hp);
/* map all segments into memory to make sure we get the addrs. the
* segments themselves are already in memseg list (which is shared and
@@ -1570,7 +1570,7 @@ eal_legacy_hugepage_attach(void)
fd = open(hf->filepath, O_RDWR);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "Could not open %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not open %s: %s",
hf->filepath, strerror(errno));
goto error;
}
@@ -1578,14 +1578,14 @@ eal_legacy_hugepage_attach(void)
map_addr = mmap(map_addr, map_sz, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_FIXED, fd, 0);
if (map_addr == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "Could not map %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not map %s: %s",
hf->filepath, strerror(errno));
goto fd_error;
}
/* set shared lock on the file. */
if (flock(fd, LOCK_SH) < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): Locking file failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): Locking file failed: %s",
__func__, strerror(errno));
goto mmap_error;
}
@@ -1593,13 +1593,13 @@ eal_legacy_hugepage_attach(void)
/* find segment data */
msl = rte_mem_virt2memseg_list(map_addr);
if (msl == NULL) {
- RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg list\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg list",
__func__);
goto mmap_error;
}
ms = rte_mem_virt2memseg(map_addr, msl);
if (ms == NULL) {
- RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg",
__func__);
goto mmap_error;
}
@@ -1607,14 +1607,14 @@ eal_legacy_hugepage_attach(void)
msl_idx = msl - mcfg->memsegs;
ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms);
if (ms_idx < 0) {
- RTE_LOG(DEBUG, EAL, "%s(): Cannot find memseg idx\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s(): Cannot find memseg idx",
__func__);
goto mmap_error;
}
/* store segment fd internally */
if (eal_memalloc_set_seg_fd(msl_idx, ms_idx, fd) < 0)
- RTE_LOG(ERR, EAL, "Could not store segment fd: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not store segment fd: %s",
rte_strerror(rte_errno));
}
/* unmap the hugepage config file, since we are done using it */
@@ -1642,9 +1642,9 @@ static int
eal_hugepage_attach(void)
{
if (eal_memalloc_sync_with_primary()) {
- RTE_LOG(ERR, EAL, "Could not map memory from primary process\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not map memory from primary process");
if (aslr_enabled() > 0)
- RTE_LOG(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes\n");
+ RTE_LOG_LINE(ERR, EAL, "It is recommended to disable ASLR in the kernel and retry running both primary and secondary processes");
return -1;
}
return 0;
@@ -1740,7 +1740,7 @@ memseg_primary_init_32(void)
max_mem = (uint64_t)RTE_MAX_MEM_MB << 20;
if (total_requested_mem > max_mem) {
- RTE_LOG(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory\n",
+ RTE_LOG_LINE(ERR, EAL, "Invalid parameters: 32-bit process can at most use %uM of memory",
(unsigned int)(max_mem >> 20));
return -1;
}
@@ -1787,7 +1787,7 @@ memseg_primary_init_32(void)
skip |= active_sockets == 0 && socket_id != main_lcore_socket;
if (skip) {
- RTE_LOG(DEBUG, EAL, "Will not preallocate memory on socket %u\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Will not preallocate memory on socket %u",
socket_id);
continue;
}
@@ -1819,8 +1819,8 @@ memseg_primary_init_32(void)
max_pagesz_mem = RTE_ALIGN_FLOOR(max_pagesz_mem,
hugepage_sz);
- RTE_LOG(DEBUG, EAL, "Attempting to preallocate "
- "%" PRIu64 "M on socket %i\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Attempting to preallocate %" PRIu64 "M on socket %i",
max_pagesz_mem >> 20, socket_id);
type_msl_idx = 0;
@@ -1830,8 +1830,8 @@ memseg_primary_init_32(void)
unsigned int n_segs;
if (msl_idx >= RTE_MAX_MEMSEG_LISTS) {
- RTE_LOG(ERR, EAL,
- "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "No more space in memseg lists, please increase RTE_MAX_MEMSEG_LISTS");
return -1;
}
@@ -1847,7 +1847,7 @@ memseg_primary_init_32(void)
/* failing to allocate a memseg list is
* a serious error.
*/
- RTE_LOG(ERR, EAL, "Cannot allocate memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memseg list");
return -1;
}
@@ -1855,7 +1855,7 @@ memseg_primary_init_32(void)
/* if we couldn't allocate VA space, we
* can try with smaller page sizes.
*/
- RTE_LOG(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space for memseg list, retrying with different page size");
/* deallocate memseg list */
if (memseg_list_free(msl))
return -1;
@@ -1870,7 +1870,7 @@ memseg_primary_init_32(void)
cur_socket_mem += cur_pagesz_mem;
}
if (cur_socket_mem == 0) {
- RTE_LOG(ERR, EAL, "Cannot allocate VA space on socket %u\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate VA space on socket %u",
socket_id);
return -1;
}
@@ -1901,13 +1901,13 @@ memseg_secondary_init(void)
continue;
if (rte_fbarray_attach(&msl->memseg_arr)) {
- RTE_LOG(ERR, EAL, "Cannot attach to primary process memseg lists\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot attach to primary process memseg lists");
return -1;
}
/* preallocate VA space */
if (eal_memseg_list_alloc(msl, 0)) {
- RTE_LOG(ERR, EAL, "Cannot preallocate VA space for hugepage memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot preallocate VA space for hugepage memory");
return -1;
}
}
@@ -1930,21 +1930,21 @@ rte_eal_memseg_init(void)
lim.rlim_cur = lim.rlim_max;
if (setrlimit(RLIMIT_NOFILE, &lim) < 0) {
- RTE_LOG(DEBUG, EAL, "Setting maximum number of open files failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Setting maximum number of open files failed: %s",
strerror(errno));
} else {
- RTE_LOG(DEBUG, EAL, "Setting maximum number of open files to %"
- PRIu64 "\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Setting maximum number of open files to %" PRIu64,
(uint64_t)lim.rlim_cur);
}
} else {
- RTE_LOG(ERR, EAL, "Cannot get current resource limits\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot get current resource limits");
}
#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
if (!internal_conf->legacy_mem && rte_socket_count() > 1) {
- RTE_LOG(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.\n");
- RTE_LOG(WARNING, EAL, "This will have adverse consequences for performance and usability.\n");
- RTE_LOG(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.\n");
+ RTE_LOG_LINE(WARNING, EAL, "DPDK is running on a NUMA system, but is compiled without NUMA support.");
+ RTE_LOG_LINE(WARNING, EAL, "This will have adverse consequences for performance and usability.");
+ RTE_LOG_LINE(WARNING, EAL, "Please use --"OPT_LEGACY_MEM" option, or recompile with NUMA support.");
}
#endif
diff --git a/lib/eal/linux/eal_thread.c b/lib/eal/linux/eal_thread.c
index 880070c627..80b6f19a9e 100644
--- a/lib/eal/linux/eal_thread.c
+++ b/lib/eal/linux/eal_thread.c
@@ -28,7 +28,7 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
const size_t truncatedsz = sizeof(truncated);
if (strlcpy(truncated, thread_name, truncatedsz) >= truncatedsz)
- RTE_LOG(DEBUG, EAL, "Truncated thread name\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Truncated thread name");
ret = pthread_setname_np((pthread_t)thread_id.opaque_id, truncated);
#endif
@@ -37,5 +37,5 @@ void rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
RTE_SET_USED(thread_name);
if (ret != 0)
- RTE_LOG(DEBUG, EAL, "Failed to set thread name\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name");
}
diff --git a/lib/eal/linux/eal_timer.c b/lib/eal/linux/eal_timer.c
index df9ad61ae9..3813b1a66e 100644
--- a/lib/eal/linux/eal_timer.c
+++ b/lib/eal/linux/eal_timer.c
@@ -139,20 +139,20 @@ rte_eal_hpet_init(int make_default)
eal_get_internal_configuration();
if (internal_conf->no_hpet) {
- RTE_LOG(NOTICE, EAL, "HPET is disabled\n");
+ RTE_LOG_LINE(NOTICE, EAL, "HPET is disabled");
return -1;
}
fd = open(DEV_HPET, O_RDONLY);
if (fd < 0) {
- RTE_LOG(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!\n",
+ RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot open "DEV_HPET": %s!",
strerror(errno));
internal_conf->no_hpet = 1;
return -1;
}
eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0);
if (eal_hpet == MAP_FAILED) {
- RTE_LOG(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!\n");
+ RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot mmap "DEV_HPET"!");
close(fd);
internal_conf->no_hpet = 1;
return -1;
@@ -166,7 +166,7 @@ rte_eal_hpet_init(int make_default)
eal_hpet_resolution_hz = (1000ULL*1000ULL*1000ULL*1000ULL*1000ULL) /
(uint64_t)eal_hpet_resolution_fs;
- RTE_LOG(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz\n",
+ RTE_LOG_LINE(INFO, EAL, "HPET frequency is ~%"PRIu64" kHz",
eal_hpet_resolution_hz/1000);
eal_hpet_msb = (eal_hpet->counter_l >> 30);
@@ -176,7 +176,7 @@ rte_eal_hpet_init(int make_default)
ret = rte_thread_create_internal_control(&msb_inc_thread_id, "hpet-msb",
hpet_msb_inc, NULL);
if (ret != 0) {
- RTE_LOG(ERR, EAL, "ERROR: Cannot create HPET timer thread!\n");
+ RTE_LOG_LINE(ERR, EAL, "ERROR: Cannot create HPET timer thread!");
internal_conf->no_hpet = 1;
return -1;
}
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index ad3c1654b2..e8a783aaa8 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -367,7 +367,7 @@ vfio_open_group_fd(int iommu_group_num)
if (vfio_group_fd < 0) {
/* if file not found, it's not an error */
if (errno != ENOENT) {
- RTE_LOG(ERR, EAL, "Cannot open %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot open %s: %s",
filename, strerror(errno));
return -1;
}
@@ -379,8 +379,8 @@ vfio_open_group_fd(int iommu_group_num)
vfio_group_fd = open(filename, O_RDWR);
if (vfio_group_fd < 0) {
if (errno != ENOENT) {
- RTE_LOG(ERR, EAL,
- "Cannot open %s: %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "Cannot open %s: %s",
filename, strerror(errno));
return -1;
}
@@ -408,14 +408,14 @@ vfio_open_group_fd(int iommu_group_num)
if (p->result == SOCKET_OK && mp_rep->num_fds == 1) {
vfio_group_fd = mp_rep->fds[0];
} else if (p->result == SOCKET_NO_FD) {
- RTE_LOG(ERR, EAL, "Bad VFIO group fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Bad VFIO group fd");
vfio_group_fd = -ENOENT;
}
}
free(mp_reply.msgs);
if (vfio_group_fd < 0 && vfio_group_fd != -ENOENT)
- RTE_LOG(ERR, EAL, "Cannot request VFIO group fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO group fd");
return vfio_group_fd;
}
@@ -452,7 +452,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg,
/* Lets see first if there is room for a new group */
if (vfio_cfg->vfio_active_groups == VFIO_MAX_GROUPS) {
- RTE_LOG(ERR, EAL, "Maximum number of VFIO groups reached!\n");
+ RTE_LOG_LINE(ERR, EAL, "Maximum number of VFIO groups reached!");
return -1;
}
@@ -465,13 +465,13 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg,
/* This should not happen */
if (i == VFIO_MAX_GROUPS) {
- RTE_LOG(ERR, EAL, "No VFIO group free slot found\n");
+ RTE_LOG_LINE(ERR, EAL, "No VFIO group free slot found");
return -1;
}
vfio_group_fd = vfio_open_group_fd(iommu_group_num);
if (vfio_group_fd < 0) {
- RTE_LOG(ERR, EAL, "Failed to open VFIO group %d\n",
+ RTE_LOG_LINE(ERR, EAL, "Failed to open VFIO group %d",
iommu_group_num);
return vfio_group_fd;
}
@@ -551,13 +551,13 @@ vfio_group_device_get(int vfio_group_fd)
vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!");
return;
}
i = get_vfio_group_idx(vfio_group_fd);
if (i < 0 || i > (VFIO_MAX_GROUPS - 1))
- RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i);
+ RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i);
else
vfio_cfg->vfio_groups[i].devices++;
}
@@ -570,13 +570,13 @@ vfio_group_device_put(int vfio_group_fd)
vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!");
return;
}
i = get_vfio_group_idx(vfio_group_fd);
if (i < 0 || i > (VFIO_MAX_GROUPS - 1))
- RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i);
+ RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i);
else
vfio_cfg->vfio_groups[i].devices--;
}
@@ -589,13 +589,13 @@ vfio_group_device_count(int vfio_group_fd)
vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!");
return -1;
}
i = get_vfio_group_idx(vfio_group_fd);
if (i < 0 || i > (VFIO_MAX_GROUPS - 1)) {
- RTE_LOG(ERR, EAL, "Wrong VFIO group index (%d)\n", i);
+ RTE_LOG_LINE(ERR, EAL, "Wrong VFIO group index (%d)", i);
return -1;
}
@@ -636,8 +636,8 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
while (cur_len < len) {
/* some memory segments may have invalid IOVA */
if (ms->iova == RTE_BAD_IOVA) {
- RTE_LOG(DEBUG, EAL,
- "Memory segment at %p has bad IOVA, skipping\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Memory segment at %p has bad IOVA, skipping",
ms->addr);
goto next;
}
@@ -670,7 +670,7 @@ vfio_sync_default_container(void)
/* default container fd should have been opened in rte_vfio_enable() */
if (!default_vfio_cfg->vfio_enabled ||
default_vfio_cfg->vfio_container_fd < 0) {
- RTE_LOG(ERR, EAL, "VFIO support is not initialized\n");
+ RTE_LOG_LINE(ERR, EAL, "VFIO support is not initialized");
return -1;
}
@@ -690,8 +690,8 @@ vfio_sync_default_container(void)
}
free(mp_reply.msgs);
if (iommu_type_id < 0) {
- RTE_LOG(ERR, EAL,
- "Could not get IOMMU type for default container\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "Could not get IOMMU type for default container");
return -1;
}
@@ -708,7 +708,7 @@ vfio_sync_default_container(void)
return 0;
}
- RTE_LOG(ERR, EAL, "Could not find IOMMU type id (%i)\n",
+ RTE_LOG_LINE(ERR, EAL, "Could not find IOMMU type id (%i)",
iommu_type_id);
return -1;
}
@@ -721,7 +721,7 @@ rte_vfio_clear_group(int vfio_group_fd)
vfio_cfg = get_vfio_cfg_by_group_fd(vfio_group_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO group fd!\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO group fd!");
return -1;
}
@@ -756,8 +756,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
/* get group number */
ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num);
if (ret == 0) {
- RTE_LOG(NOTICE, EAL,
- "%s not managed by VFIO driver, skipping\n",
+ RTE_LOG_LINE(NOTICE, EAL,
+ "%s not managed by VFIO driver, skipping",
dev_addr);
return 1;
}
@@ -776,8 +776,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
* isn't managed by VFIO
*/
if (vfio_group_fd == -ENOENT) {
- RTE_LOG(NOTICE, EAL,
- "%s not managed by VFIO driver, skipping\n",
+ RTE_LOG_LINE(NOTICE, EAL,
+ "%s not managed by VFIO driver, skipping",
dev_addr);
return 1;
}
@@ -790,14 +790,14 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
/* check if the group is viable */
ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
if (ret) {
- RTE_LOG(ERR, EAL, "%s cannot get VFIO group status, "
- "error %i (%s)\n", dev_addr, errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "%s cannot get VFIO group status, error %i (%s)",
+ dev_addr, errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
- RTE_LOG(ERR, EAL, "%s VFIO group is not viable! "
- "Not all devices in IOMMU group bound to VFIO or unbound\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s VFIO group is not viable! Not all devices in IOMMU group bound to VFIO or unbound",
dev_addr);
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
@@ -817,9 +817,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
ret = ioctl(vfio_group_fd, VFIO_GROUP_SET_CONTAINER,
&vfio_container_fd);
if (ret) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"%s cannot add VFIO group to container, error "
- "%i (%s)\n", dev_addr, errno, strerror(errno));
+ "%i (%s)", dev_addr, errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
@@ -841,8 +841,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
/* select an IOMMU type which we will be using */
t = vfio_set_iommu_type(vfio_container_fd);
if (!t) {
- RTE_LOG(ERR, EAL,
- "%s failed to select IOMMU type\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "%s failed to select IOMMU type",
dev_addr);
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
@@ -857,9 +857,9 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
else
ret = 0;
if (ret) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"%s DMA remapping failed, error "
- "%i (%s)\n",
+ "%i (%s)",
dev_addr, errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
@@ -886,10 +886,10 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
map->addr, map->iova, map->len,
1);
if (ret) {
- RTE_LOG(ERR, EAL, "Couldn't map user memory for DMA: "
+ RTE_LOG_LINE(ERR, EAL, "Couldn't map user memory for DMA: "
"va: 0x%" PRIx64 " "
"iova: 0x%" PRIx64 " "
- "len: 0x%" PRIu64 "\n",
+ "len: 0x%" PRIx64,
map->addr, map->iova,
map->len);
rte_spinlock_recursive_unlock(
@@ -911,13 +911,13 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
rte_mcfg_mem_read_unlock();
if (ret && rte_errno != ENOTSUP) {
- RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not install memory event callback for VFIO");
return -1;
}
if (ret)
- RTE_LOG(DEBUG, EAL, "Memory event callbacks not supported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Memory event callbacks not supported");
else
- RTE_LOG(DEBUG, EAL, "Installed memory event callback for VFIO\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Installed memory event callback for VFIO");
}
} else if (rte_eal_process_type() != RTE_PROC_PRIMARY &&
vfio_cfg == default_vfio_cfg &&
@@ -929,7 +929,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
*/
ret = vfio_sync_default_container();
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Could not sync default VFIO container\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not sync default VFIO container");
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
@@ -937,7 +937,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
/* we have successfully initialized VFIO, notify user */
const struct vfio_iommu_type *t =
default_vfio_cfg->vfio_iommu_type;
- RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n",
+ RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)",
t->type_id, t->name);
}
@@ -965,7 +965,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
* the VFIO group or the container not having IOMMU configured.
*/
- RTE_LOG(WARNING, EAL, "Getting a vfio_dev_fd for %s failed\n",
+ RTE_LOG_LINE(WARNING, EAL, "Getting a vfio_dev_fd for %s failed",
dev_addr);
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
@@ -976,8 +976,8 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
dev_get_info:
ret = ioctl(*vfio_dev_fd, VFIO_DEVICE_GET_INFO, device_info);
if (ret) {
- RTE_LOG(ERR, EAL, "%s cannot get device info, "
- "error %i (%s)\n", dev_addr, errno,
+ RTE_LOG_LINE(ERR, EAL,
+ "%s cannot get device info, error %i (%s)", dev_addr, errno,
strerror(errno));
close(*vfio_dev_fd);
close(vfio_group_fd);
@@ -1007,7 +1007,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
/* get group number */
ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num);
if (ret <= 0) {
- RTE_LOG(WARNING, EAL, "%s not managed by VFIO driver\n",
+ RTE_LOG_LINE(WARNING, EAL, "%s not managed by VFIO driver",
dev_addr);
/* This is an error at this point. */
ret = -1;
@@ -1017,7 +1017,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
/* get the actual group fd */
vfio_group_fd = rte_vfio_get_group_fd(iommu_group_num);
if (vfio_group_fd < 0) {
- RTE_LOG(INFO, EAL, "rte_vfio_get_group_fd failed for %s\n",
+ RTE_LOG_LINE(INFO, EAL, "rte_vfio_get_group_fd failed for %s",
dev_addr);
ret = vfio_group_fd;
goto out;
@@ -1034,7 +1034,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
/* Closing a device */
if (close(vfio_dev_fd) < 0) {
- RTE_LOG(INFO, EAL, "Error when closing vfio_dev_fd for %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_dev_fd for %s",
dev_addr);
ret = -1;
goto out;
@@ -1047,14 +1047,14 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
if (!vfio_group_device_count(vfio_group_fd)) {
if (close(vfio_group_fd) < 0) {
- RTE_LOG(INFO, EAL, "Error when closing vfio_group_fd for %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Error when closing vfio_group_fd for %s",
dev_addr);
ret = -1;
goto out;
}
if (rte_vfio_clear_group(vfio_group_fd) < 0) {
- RTE_LOG(INFO, EAL, "Error when clearing group for %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Error when clearing group for %s",
dev_addr);
ret = -1;
goto out;
@@ -1101,21 +1101,21 @@ rte_vfio_enable(const char *modname)
}
}
- RTE_LOG(DEBUG, EAL, "Probing VFIO support...\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Probing VFIO support...");
/* check if vfio module is loaded */
vfio_available = rte_eal_check_module(modname);
/* return error directly */
if (vfio_available == -1) {
- RTE_LOG(INFO, EAL, "Could not get loaded module details!\n");
+ RTE_LOG_LINE(INFO, EAL, "Could not get loaded module details!");
return -1;
}
/* return 0 if VFIO modules not loaded */
if (vfio_available == 0) {
- RTE_LOG(DEBUG, EAL,
- "VFIO modules not loaded, skipping VFIO support...\n");
+ RTE_LOG_LINE(DEBUG, EAL,
+ "VFIO modules not loaded, skipping VFIO support...");
return 0;
}
@@ -1131,10 +1131,10 @@ rte_vfio_enable(const char *modname)
/* check if we have VFIO driver enabled */
if (default_vfio_cfg->vfio_container_fd != -1) {
- RTE_LOG(INFO, EAL, "VFIO support initialized\n");
+ RTE_LOG_LINE(INFO, EAL, "VFIO support initialized");
default_vfio_cfg->vfio_enabled = 1;
} else {
- RTE_LOG(NOTICE, EAL, "VFIO support could not be initialized\n");
+ RTE_LOG_LINE(NOTICE, EAL, "VFIO support could not be initialized");
}
return 0;
@@ -1186,7 +1186,7 @@ vfio_get_default_container_fd(void)
}
free(mp_reply.msgs);
- RTE_LOG(ERR, EAL, "Cannot request default VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot request default VFIO container fd");
return -1;
}
@@ -1209,13 +1209,13 @@ vfio_set_iommu_type(int vfio_container_fd)
int ret = ioctl(vfio_container_fd, VFIO_SET_IOMMU,
t->type_id);
if (!ret) {
- RTE_LOG(INFO, EAL, "Using IOMMU type %d (%s)\n",
+ RTE_LOG_LINE(INFO, EAL, "Using IOMMU type %d (%s)",
t->type_id, t->name);
return t;
}
/* not an error, there may be more supported IOMMU types */
- RTE_LOG(DEBUG, EAL, "Set IOMMU type %d (%s) failed, error "
- "%i (%s)\n", t->type_id, t->name, errno,
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Set IOMMU type %d (%s) failed, error %i (%s)", t->type_id, t->name, errno,
strerror(errno));
}
/* if we didn't find a suitable IOMMU type, fail */
@@ -1233,15 +1233,15 @@ vfio_has_supported_extensions(int vfio_container_fd)
ret = ioctl(vfio_container_fd, VFIO_CHECK_EXTENSION,
t->type_id);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Could not get IOMMU type, error "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL,
+ "Could not get IOMMU type, error %i (%s)", errno, strerror(errno));
close(vfio_container_fd);
return -1;
} else if (ret == 1) {
/* we found a supported extension */
n_extensions++;
}
- RTE_LOG(DEBUG, EAL, "IOMMU type %d (%s) is %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "IOMMU type %d (%s) is %s",
t->type_id, t->name,
ret ? "supported" : "not supported");
}
@@ -1271,9 +1271,9 @@ rte_vfio_get_container_fd(void)
if (internal_conf->process_type == RTE_PROC_PRIMARY) {
vfio_container_fd = open(VFIO_CONTAINER_PATH, O_RDWR);
if (vfio_container_fd < 0) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Cannot open VFIO container %s, error "
- "%i (%s)\n", VFIO_CONTAINER_PATH,
+ "%i (%s)", VFIO_CONTAINER_PATH,
errno, strerror(errno));
return -1;
}
@@ -1282,19 +1282,19 @@ rte_vfio_get_container_fd(void)
ret = ioctl(vfio_container_fd, VFIO_GET_API_VERSION);
if (ret != VFIO_API_VERSION) {
if (ret < 0)
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Could not get VFIO API version, error "
- "%i (%s)\n", errno, strerror(errno));
+ "%i (%s)", errno, strerror(errno));
else
- RTE_LOG(ERR, EAL, "Unsupported VFIO API version!\n");
+ RTE_LOG_LINE(ERR, EAL, "Unsupported VFIO API version!");
close(vfio_container_fd);
return -1;
}
ret = vfio_has_supported_extensions(vfio_container_fd);
if (ret) {
- RTE_LOG(ERR, EAL,
- "No supported IOMMU extensions found!\n");
+ RTE_LOG_LINE(ERR, EAL,
+ "No supported IOMMU extensions found!");
return -1;
}
@@ -1322,7 +1322,7 @@ rte_vfio_get_container_fd(void)
}
free(mp_reply.msgs);
- RTE_LOG(ERR, EAL, "Cannot request VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot request VFIO container fd");
return -1;
}
@@ -1352,7 +1352,7 @@ rte_vfio_get_group_num(const char *sysfs_base,
tok, RTE_DIM(tok), '/');
if (ret <= 0) {
- RTE_LOG(ERR, EAL, "%s cannot get IOMMU group\n", dev_addr);
+ RTE_LOG_LINE(ERR, EAL, "%s cannot get IOMMU group", dev_addr);
return -1;
}
@@ -1362,7 +1362,7 @@ rte_vfio_get_group_num(const char *sysfs_base,
end = group_tok;
*iommu_group_num = strtol(group_tok, &end, 10);
if ((end != group_tok && *end != '\0') || errno != 0) {
- RTE_LOG(ERR, EAL, "%s error parsing IOMMU number!\n", dev_addr);
+ RTE_LOG_LINE(ERR, EAL, "%s error parsing IOMMU number!", dev_addr);
return -1;
}
@@ -1411,12 +1411,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
* returned from kernel.
*/
if (errno == EEXIST) {
- RTE_LOG(DEBUG, EAL,
+ RTE_LOG_LINE(DEBUG, EAL,
"Memory segment is already mapped, skipping");
} else {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Cannot set up DMA remapping, error "
- "%i (%s)\n", errno, strerror(errno));
+ "%i (%s)", errno, strerror(errno));
return -1;
}
}
@@ -1429,12 +1429,12 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA,
&dma_unmap);
if (ret) {
- RTE_LOG(ERR, EAL, "Cannot clear DMA remapping, error "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "Cannot clear DMA remapping, error "
+ "%i (%s)", errno, strerror(errno));
return -1;
} else if (dma_unmap.size != len) {
- RTE_LOG(ERR, EAL, "Unexpected size %"PRIu64
- " of DMA remapping cleared instead of %"PRIu64"\n",
+ RTE_LOG_LINE(ERR, EAL, "Unexpected size %"PRIu64
+ " of DMA remapping cleared instead of %"PRIu64,
(uint64_t)dma_unmap.size, len);
rte_errno = EIO;
return -1;
@@ -1470,16 +1470,16 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
struct vfio_iommu_type1_dma_map dma_map;
if (iova + len > spapr_dma_win_len) {
- RTE_LOG(ERR, EAL, "DMA map attempt outside DMA window\n");
+ RTE_LOG_LINE(ERR, EAL, "DMA map attempt outside DMA window");
return -1;
}
ret = ioctl(vfio_container_fd,
VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
if (ret) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Cannot register vaddr for IOMMU, error "
- "%i (%s)\n", errno, strerror(errno));
+ "%i (%s)", errno, strerror(errno));
return -1;
}
@@ -1493,8 +1493,8 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
ret = ioctl(vfio_container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
if (ret) {
- RTE_LOG(ERR, EAL, "Cannot map vaddr for IOMMU, error "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "Cannot map vaddr for IOMMU, error "
+ "%i (%s)", errno, strerror(errno));
return -1;
}
@@ -1509,17 +1509,17 @@ vfio_spapr_dma_do_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
ret = ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA,
&dma_unmap);
if (ret) {
- RTE_LOG(ERR, EAL, "Cannot unmap vaddr for IOMMU, error "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "Cannot unmap vaddr for IOMMU, error "
+ "%i (%s)", errno, strerror(errno));
return -1;
}
ret = ioctl(vfio_container_fd,
VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);
if (ret) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Cannot unregister vaddr for IOMMU, error "
- "%i (%s)\n", errno, strerror(errno));
+ "%i (%s)", errno, strerror(errno));
return -1;
}
}
@@ -1599,7 +1599,7 @@ find_highest_mem_addr(struct spapr_size_walk_param *param)
*/
FILE *fd = fopen(proc_iomem, "r");
if (fd == NULL) {
- RTE_LOG(ERR, EAL, "Cannot open %s\n", proc_iomem);
+ RTE_LOG_LINE(ERR, EAL, "Cannot open %s", proc_iomem);
return -1;
}
/* Scan /proc/iomem for the highest PA in the system */
@@ -1612,15 +1612,15 @@ find_highest_mem_addr(struct spapr_size_walk_param *param)
/* Validate the format of the memory string */
if (space == NULL || dash == NULL || space < dash) {
- RTE_LOG(ERR, EAL, "Can't parse line \"%s\" in file %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Can't parse line \"%s\" in file %s",
line, proc_iomem);
continue;
}
start = strtoull(line, NULL, 16);
end = strtoull(dash + 1, NULL, 16);
- RTE_LOG(DEBUG, EAL, "Found system RAM from 0x%" PRIx64
- " to 0x%" PRIx64 "\n", start, end);
+ RTE_LOG_LINE(DEBUG, EAL, "Found system RAM from 0x%" PRIx64
+ " to 0x%" PRIx64, start, end);
if (end > max)
max = end;
}
@@ -1628,22 +1628,22 @@ find_highest_mem_addr(struct spapr_size_walk_param *param)
fclose(fd);
if (max == 0) {
- RTE_LOG(ERR, EAL, "Failed to find valid \"System RAM\" "
- "entry in file %s\n", proc_iomem);
+ RTE_LOG_LINE(ERR, EAL, "Failed to find valid \"System RAM\" "
+ "entry in file %s", proc_iomem);
return -1;
}
spapr_dma_win_len = rte_align64pow2(max + 1);
return 0;
} else if (rte_eal_iova_mode() == RTE_IOVA_VA) {
- RTE_LOG(DEBUG, EAL, "Highest VA address in memseg list is 0x%"
- PRIx64 "\n", param->max_va);
+ RTE_LOG_LINE(DEBUG, EAL, "Highest VA address in memseg list is 0x%"
+ PRIx64, param->max_va);
spapr_dma_win_len = rte_align64pow2(param->max_va);
return 0;
}
spapr_dma_win_len = 0;
- RTE_LOG(ERR, EAL, "Unsupported IOVA mode\n");
+ RTE_LOG_LINE(ERR, EAL, "Unsupported IOVA mode");
return -1;
}
@@ -1668,18 +1668,18 @@ spapr_dma_win_size(void)
/* walk the memseg list to find the page size/max VA address */
memset(&param, 0, sizeof(param));
if (rte_memseg_list_walk(vfio_spapr_size_walk, &param) < 0) {
- RTE_LOG(ERR, EAL, "Failed to walk memseg list for DMA window size\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to walk memseg list for DMA window size");
return -1;
}
/* we can't be sure if DMA window covers external memory */
if (param.is_user_managed)
- RTE_LOG(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU\n");
+ RTE_LOG_LINE(WARNING, EAL, "Detected user managed external memory which may not be managed by the IOMMU");
/* check physical/virtual memory size */
if (find_highest_mem_addr(&param) < 0)
return -1;
- RTE_LOG(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64 "\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Setting DMA window size to 0x%" PRIx64,
spapr_dma_win_len);
spapr_dma_win_page_sz = param.page_sz;
rte_mem_set_dma_mask(rte_ctz64(spapr_dma_win_len));
@@ -1703,7 +1703,7 @@ vfio_spapr_create_dma_window(int vfio_container_fd)
ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_GET_INFO, &info);
if (ret) {
- RTE_LOG(ERR, EAL, "Cannot get IOMMU info, error %i (%s)\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot get IOMMU info, error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -1744,17 +1744,17 @@ vfio_spapr_create_dma_window(int vfio_container_fd)
}
#endif /* VFIO_IOMMU_SPAPR_INFO_DDW */
if (ret) {
- RTE_LOG(ERR, EAL, "Cannot create new DMA window, error "
- "%i (%s)\n", errno, strerror(errno));
- RTE_LOG(ERR, EAL,
- "Consider using a larger hugepage size if supported by the system\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create new DMA window, error "
+ "%i (%s)", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL,
+ "Consider using a larger hugepage size if supported by the system");
return -1;
}
/* verify the start address */
if (create.start_addr != 0) {
- RTE_LOG(ERR, EAL, "Received unsupported start address 0x%"
- PRIx64 "\n", (uint64_t)create.start_addr);
+ RTE_LOG_LINE(ERR, EAL, "Received unsupported start address 0x%"
+ PRIx64, (uint64_t)create.start_addr);
return -1;
}
return ret;
@@ -1769,13 +1769,13 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr,
if (do_map) {
if (vfio_spapr_dma_do_map(vfio_container_fd,
vaddr, iova, len, 1)) {
- RTE_LOG(ERR, EAL, "Failed to map DMA\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to map DMA");
ret = -1;
}
} else {
if (vfio_spapr_dma_do_map(vfio_container_fd,
vaddr, iova, len, 0)) {
- RTE_LOG(ERR, EAL, "Failed to unmap DMA\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed to unmap DMA");
ret = -1;
}
}
@@ -1787,7 +1787,7 @@ static int
vfio_spapr_dma_map(int vfio_container_fd)
{
if (vfio_spapr_create_dma_window(vfio_container_fd) < 0) {
- RTE_LOG(ERR, EAL, "Could not create new DMA window!\n");
+ RTE_LOG_LINE(ERR, EAL, "Could not create new DMA window!");
return -1;
}
@@ -1822,14 +1822,14 @@ vfio_dma_mem_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
const struct vfio_iommu_type *t = vfio_cfg->vfio_iommu_type;
if (!t) {
- RTE_LOG(ERR, EAL, "VFIO support not initialized\n");
+ RTE_LOG_LINE(ERR, EAL, "VFIO support not initialized");
rte_errno = ENODEV;
return -1;
}
if (!t->dma_user_map_func) {
- RTE_LOG(ERR, EAL,
- "VFIO custom DMA region mapping not supported by IOMMU %s\n",
+ RTE_LOG_LINE(ERR, EAL,
+ "VFIO custom DMA region mapping not supported by IOMMU %s",
t->name);
rte_errno = ENOTSUP;
return -1;
@@ -1851,7 +1851,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
user_mem_maps = &vfio_cfg->mem_maps;
rte_spinlock_recursive_lock(&user_mem_maps->lock);
if (user_mem_maps->n_maps == VFIO_MAX_USER_MEM_MAPS) {
- RTE_LOG(ERR, EAL, "No more space for user mem maps\n");
+ RTE_LOG_LINE(ERR, EAL, "No more space for user mem maps");
rte_errno = ENOMEM;
ret = -1;
goto out;
@@ -1865,7 +1865,7 @@ container_dma_map(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
* this to be unsupported, because we can't just store any old
* mapping and pollute list of active mappings willy-nilly.
*/
- RTE_LOG(ERR, EAL, "Couldn't map new region for DMA\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't map new region for DMA");
ret = -1;
goto out;
}
@@ -1921,7 +1921,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
orig_maps, RTE_DIM(orig_maps));
/* did we find anything? */
if (n_orig < 0) {
- RTE_LOG(ERR, EAL, "Couldn't find previously mapped region\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't find previously mapped region");
rte_errno = EINVAL;
ret = -1;
goto out;
@@ -1943,7 +1943,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
vaddr + len, iova + len);
if (!start_aligned || !end_aligned) {
- RTE_LOG(DEBUG, EAL, "DMA partial unmap unsupported\n");
+ RTE_LOG_LINE(DEBUG, EAL, "DMA partial unmap unsupported");
rte_errno = ENOTSUP;
ret = -1;
goto out;
@@ -1961,7 +1961,7 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
/* can we store the new maps in our list? */
newlen = (user_mem_maps->n_maps - n_orig) + n_new;
if (newlen >= VFIO_MAX_USER_MEM_MAPS) {
- RTE_LOG(ERR, EAL, "Not enough space to store partial mapping\n");
+ RTE_LOG_LINE(ERR, EAL, "Not enough space to store partial mapping");
rte_errno = ENOMEM;
ret = -1;
goto out;
@@ -1978,11 +1978,11 @@ container_dma_unmap(struct vfio_config *vfio_cfg, uint64_t vaddr, uint64_t iova,
* within our mapped range but had invalid alignment).
*/
if (rte_errno != ENODEV && rte_errno != ENOTSUP) {
- RTE_LOG(ERR, EAL, "Couldn't unmap region for DMA\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't unmap region for DMA");
ret = -1;
goto out;
} else {
- RTE_LOG(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway\n");
+ RTE_LOG_LINE(DEBUG, EAL, "DMA unmapping failed, but removing mappings anyway");
}
}
@@ -2005,8 +2005,8 @@ rte_vfio_noiommu_is_enabled(void)
fd = open(VFIO_NOIOMMU_MODE, O_RDONLY);
if (fd < 0) {
if (errno != ENOENT) {
- RTE_LOG(ERR, EAL, "Cannot open VFIO noiommu file "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "Cannot open VFIO noiommu file "
+ "%i (%s)", errno, strerror(errno));
return -1;
}
/*
@@ -2019,8 +2019,8 @@ rte_vfio_noiommu_is_enabled(void)
cnt = read(fd, &c, 1);
close(fd);
if (cnt != 1) {
- RTE_LOG(ERR, EAL, "Unable to read from VFIO noiommu file "
- "%i (%s)\n", errno, strerror(errno));
+ RTE_LOG_LINE(ERR, EAL, "Unable to read from VFIO noiommu file "
+ "%i (%s)", errno, strerror(errno));
return -1;
}
@@ -2039,13 +2039,13 @@ rte_vfio_container_create(void)
}
if (i == VFIO_MAX_CONTAINERS) {
- RTE_LOG(ERR, EAL, "Exceed max VFIO container limit\n");
+ RTE_LOG_LINE(ERR, EAL, "Exceed max VFIO container limit");
return -1;
}
vfio_cfgs[i].vfio_container_fd = rte_vfio_get_container_fd();
if (vfio_cfgs[i].vfio_container_fd < 0) {
- RTE_LOG(NOTICE, EAL, "Fail to create a new VFIO container\n");
+ RTE_LOG_LINE(NOTICE, EAL, "Fail to create a new VFIO container");
return -1;
}
@@ -2060,7 +2060,7 @@ rte_vfio_container_destroy(int container_fd)
vfio_cfg = get_vfio_cfg_by_container_fd(container_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd");
return -1;
}
@@ -2084,7 +2084,7 @@ rte_vfio_container_group_bind(int container_fd, int iommu_group_num)
vfio_cfg = get_vfio_cfg_by_container_fd(container_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd");
return -1;
}
@@ -2100,7 +2100,7 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
vfio_cfg = get_vfio_cfg_by_container_fd(container_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd");
return -1;
}
@@ -2113,14 +2113,14 @@ rte_vfio_container_group_unbind(int container_fd, int iommu_group_num)
/* This should not happen */
if (i == VFIO_MAX_GROUPS || cur_grp == NULL) {
- RTE_LOG(ERR, EAL, "Specified VFIO group number not found\n");
+ RTE_LOG_LINE(ERR, EAL, "Specified VFIO group number not found");
return -1;
}
if (cur_grp->fd >= 0 && close(cur_grp->fd) < 0) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"Error when closing vfio_group_fd for iommu_group_num "
- "%d\n", iommu_group_num);
+ "%d", iommu_group_num);
return -1;
}
cur_grp->group_num = -1;
@@ -2144,7 +2144,7 @@ rte_vfio_container_dma_map(int container_fd, uint64_t vaddr, uint64_t iova,
vfio_cfg = get_vfio_cfg_by_container_fd(container_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd");
return -1;
}
@@ -2164,7 +2164,7 @@ rte_vfio_container_dma_unmap(int container_fd, uint64_t vaddr, uint64_t iova,
vfio_cfg = get_vfio_cfg_by_container_fd(container_fd);
if (vfio_cfg == NULL) {
- RTE_LOG(ERR, EAL, "Invalid VFIO container fd\n");
+ RTE_LOG_LINE(ERR, EAL, "Invalid VFIO container fd");
return -1;
}
diff --git a/lib/eal/linux/eal_vfio_mp_sync.c b/lib/eal/linux/eal_vfio_mp_sync.c
index 157f20e583..a78113844b 100644
--- a/lib/eal/linux/eal_vfio_mp_sync.c
+++ b/lib/eal/linux/eal_vfio_mp_sync.c
@@ -33,7 +33,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer)
(const struct vfio_mp_param *)msg->param;
if (msg->len_param != sizeof(*m)) {
- RTE_LOG(ERR, EAL, "vfio received invalid message!\n");
+ RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!");
return -1;
}
@@ -95,7 +95,7 @@ vfio_mp_primary(const struct rte_mp_msg *msg, const void *peer)
break;
}
default:
- RTE_LOG(ERR, EAL, "vfio received invalid message!\n");
+ RTE_LOG_LINE(ERR, EAL, "vfio received invalid message!");
return -1;
}
diff --git a/lib/eal/riscv/rte_cycles.c b/lib/eal/riscv/rte_cycles.c
index 358f271311..e27e02d9a9 100644
--- a/lib/eal/riscv/rte_cycles.c
+++ b/lib/eal/riscv/rte_cycles.c
@@ -38,14 +38,14 @@ __rte_riscv_timefrq(void)
break;
}
fail:
- RTE_LOG(WARNING, EAL, "Unable to read timebase-frequency from FDT.\n");
+ RTE_LOG_LINE(WARNING, EAL, "Unable to read timebase-frequency from FDT.");
return 0;
}
uint64_t
get_tsc_freq_arch(void)
{
- RTE_LOG(NOTICE, EAL, "TSC using RISC-V %s.\n",
+ RTE_LOG_LINE(NOTICE, EAL, "TSC using RISC-V %s.",
RTE_RISCV_RDTSC_USE_HPM ? "rdcycle" : "rdtime");
if (!RTE_RISCV_RDTSC_USE_HPM)
return __rte_riscv_timefrq();
diff --git a/lib/eal/unix/eal_filesystem.c b/lib/eal/unix/eal_filesystem.c
index afbab9368a..4d90c2707f 100644
--- a/lib/eal/unix/eal_filesystem.c
+++ b/lib/eal/unix/eal_filesystem.c
@@ -41,7 +41,7 @@ int eal_create_runtime_dir(void)
/* create DPDK subdirectory under runtime dir */
ret = snprintf(tmp, sizeof(tmp), "%s/dpdk", directory);
if (ret < 0 || ret == sizeof(tmp)) {
- RTE_LOG(ERR, EAL, "Error creating DPDK runtime path name\n");
+ RTE_LOG_LINE(ERR, EAL, "Error creating DPDK runtime path name");
return -1;
}
@@ -49,7 +49,7 @@ int eal_create_runtime_dir(void)
ret = snprintf(run_dir, sizeof(run_dir), "%s/%s",
tmp, eal_get_hugefile_prefix());
if (ret < 0 || ret == sizeof(run_dir)) {
- RTE_LOG(ERR, EAL, "Error creating prefix-specific runtime path name\n");
+ RTE_LOG_LINE(ERR, EAL, "Error creating prefix-specific runtime path name");
return -1;
}
@@ -58,14 +58,14 @@ int eal_create_runtime_dir(void)
*/
ret = mkdir(tmp, 0700);
if (ret < 0 && errno != EEXIST) {
- RTE_LOG(ERR, EAL, "Error creating '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s",
tmp, strerror(errno));
return -1;
}
ret = mkdir(run_dir, 0700);
if (ret < 0 && errno != EEXIST) {
- RTE_LOG(ERR, EAL, "Error creating '%s': %s\n",
+ RTE_LOG_LINE(ERR, EAL, "Error creating '%s': %s",
run_dir, strerror(errno));
return -1;
}
@@ -84,20 +84,20 @@ int eal_parse_sysfs_value(const char *filename, unsigned long *val)
char *end = NULL;
if ((f = fopen(filename, "r")) == NULL) {
- RTE_LOG(ERR, EAL, "%s(): cannot open sysfs value %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): cannot open sysfs value %s",
__func__, filename);
return -1;
}
if (fgets(buf, sizeof(buf), f) == NULL) {
- RTE_LOG(ERR, EAL, "%s(): cannot read sysfs value %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): cannot read sysfs value %s",
__func__, filename);
fclose(f);
return -1;
}
*val = strtoul(buf, &end, 0);
if ((buf[0] == '\0') || (end == NULL) || (*end != '\n')) {
- RTE_LOG(ERR, EAL, "%s(): cannot parse sysfs value %s\n",
+ RTE_LOG_LINE(ERR, EAL, "%s(): cannot parse sysfs value %s",
__func__, filename);
fclose(f);
return -1;
diff --git a/lib/eal/unix/eal_firmware.c b/lib/eal/unix/eal_firmware.c
index 1a7cf8e7b7..b071bb1396 100644
--- a/lib/eal/unix/eal_firmware.c
+++ b/lib/eal/unix/eal_firmware.c
@@ -151,7 +151,7 @@ rte_firmware_read(const char *name, void **buf, size_t *bufsz)
path[PATH_MAX - 1] = '\0';
#ifndef RTE_HAS_LIBARCHIVE
if (access(path, F_OK) == 0) {
- RTE_LOG(WARNING, EAL, "libarchive not linked, %s cannot be decompressed\n",
+ RTE_LOG_LINE(WARNING, EAL, "libarchive not linked, %s cannot be decompressed",
path);
}
#else
diff --git a/lib/eal/unix/eal_unix_memory.c b/lib/eal/unix/eal_unix_memory.c
index 68ae93bd6e..16183fb395 100644
--- a/lib/eal/unix/eal_unix_memory.c
+++ b/lib/eal/unix/eal_unix_memory.c
@@ -29,8 +29,8 @@ mem_map(void *requested_addr, size_t size, int prot, int flags,
{
void *virt = mmap(requested_addr, size, prot, flags, fd, offset);
if (virt == MAP_FAILED) {
- RTE_LOG(DEBUG, EAL,
- "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Cannot mmap(%p, 0x%zx, 0x%x, 0x%x, %d, 0x%"PRIx64"): %s",
requested_addr, size, prot, flags, fd, offset,
strerror(errno));
rte_errno = errno;
@@ -44,7 +44,7 @@ mem_unmap(void *virt, size_t size)
{
int ret = munmap(virt, size);
if (ret < 0) {
- RTE_LOG(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot munmap(%p, 0x%zx): %s",
virt, size, strerror(errno));
rte_errno = errno;
}
@@ -83,7 +83,7 @@ eal_mem_set_dump(void *virt, size_t size, bool dump)
int flags = dump ? EAL_DODUMP : EAL_DONTDUMP;
int ret = madvise(virt, size, flags);
if (ret) {
- RTE_LOG(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "madvise(%p, %#zx, %d) failed: %s",
virt, size, flags, strerror(rte_errno));
rte_errno = errno;
}
diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index 36a21ab2f9..bee77e9448 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -53,7 +53,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri,
*os_pri = sched_get_priority_max(SCHED_RR);
break;
default:
- RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid.");
return EINVAL;
}
@@ -79,7 +79,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri,
}
break;
default:
- RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.");
return EINVAL;
}
@@ -97,7 +97,7 @@ thread_start_wrapper(void *arg)
if (ctx->thread_attr != NULL && CPU_COUNT(&ctx->thread_attr->cpuset) > 0) {
ret = rte_thread_set_affinity_by_id(rte_thread_self(), &ctx->thread_attr->cpuset);
if (ret != 0)
- RTE_LOG(DEBUG, EAL, "rte_thread_set_affinity_by_id failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "rte_thread_set_affinity_by_id failed");
}
pthread_mutex_lock(&ctx->wrapper_mutex);
@@ -136,7 +136,7 @@ rte_thread_create(rte_thread_t *thread_id,
if (thread_attr != NULL) {
ret = pthread_attr_init(&attr);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_attr_init failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_init failed");
goto cleanup;
}
@@ -149,7 +149,7 @@ rte_thread_create(rte_thread_t *thread_id,
ret = pthread_attr_setinheritsched(attrp,
PTHREAD_EXPLICIT_SCHED);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_attr_setinheritsched failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setinheritsched failed");
goto cleanup;
}
@@ -165,13 +165,13 @@ rte_thread_create(rte_thread_t *thread_id,
ret = pthread_attr_setschedpolicy(attrp, policy);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_attr_setschedpolicy failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedpolicy failed");
goto cleanup;
}
ret = pthread_attr_setschedparam(attrp, &param);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_attr_setschedparam failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_attr_setschedparam failed");
goto cleanup;
}
}
@@ -179,7 +179,7 @@ rte_thread_create(rte_thread_t *thread_id,
ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
thread_start_wrapper, &ctx);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_create failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_create failed");
goto cleanup;
}
@@ -211,7 +211,7 @@ rte_thread_join(rte_thread_t thread_id, uint32_t *value_ptr)
ret = pthread_join((pthread_t)thread_id.opaque_id, pres);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_join failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_join failed");
return ret;
}
@@ -256,7 +256,7 @@ rte_thread_get_priority(rte_thread_t thread_id,
ret = pthread_getschedparam((pthread_t)thread_id.opaque_id, &policy,
&param);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "pthread_getschedparam failed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_getschedparam failed");
goto cleanup;
}
@@ -295,13 +295,13 @@ rte_thread_key_create(rte_thread_key *key, void (*destructor)(void *))
*key = malloc(sizeof(**key));
if ((*key) == NULL) {
- RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key.");
rte_errno = ENOMEM;
return -1;
}
err = pthread_key_create(&((*key)->thread_index), destructor);
if (err) {
- RTE_LOG(DEBUG, EAL, "pthread_key_create failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_key_create failed: %s",
strerror(err));
free(*key);
rte_errno = ENOEXEC;
@@ -316,13 +316,13 @@ rte_thread_key_delete(rte_thread_key key)
int err;
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return -1;
}
err = pthread_key_delete(key->thread_index);
if (err) {
- RTE_LOG(DEBUG, EAL, "pthread_key_delete failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_key_delete failed: %s",
strerror(err));
free(key);
rte_errno = ENOEXEC;
@@ -338,13 +338,13 @@ rte_thread_value_set(rte_thread_key key, const void *value)
int err;
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return -1;
}
err = pthread_setspecific(key->thread_index, value);
if (err) {
- RTE_LOG(DEBUG, EAL, "pthread_setspecific failed: %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "pthread_setspecific failed: %s",
strerror(err));
rte_errno = ENOEXEC;
return -1;
@@ -356,7 +356,7 @@ void *
rte_thread_value_get(rte_thread_key key)
{
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return NULL;
}
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 7ec2152211..b573fa7c74 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -67,7 +67,7 @@ eal_proc_type_detect(void)
ptype = RTE_PROC_SECONDARY;
}
- RTE_LOG(INFO, EAL, "Auto-detected process type: %s\n",
+ RTE_LOG_LINE(INFO, EAL, "Auto-detected process type: %s",
ptype == RTE_PROC_PRIMARY ? "PRIMARY" : "SECONDARY");
return ptype;
@@ -175,16 +175,16 @@ eal_parse_args(int argc, char **argv)
exit(EXIT_SUCCESS);
default:
if (opt < OPT_LONG_MIN_NUM && isprint(opt)) {
- RTE_LOG(ERR, EAL, "Option %c is not supported "
- "on Windows\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %c is not supported "
+ "on Windows", opt);
} else if (opt >= OPT_LONG_MIN_NUM &&
opt < OPT_LONG_MAX_NUM) {
- RTE_LOG(ERR, EAL, "Option %s is not supported "
- "on Windows\n",
+ RTE_LOG_LINE(ERR, EAL, "Option %s is not supported "
+ "on Windows",
eal_long_options[option_index].name);
} else {
- RTE_LOG(ERR, EAL, "Option %d is not supported "
- "on Windows\n", opt);
+ RTE_LOG_LINE(ERR, EAL, "Option %d is not supported "
+ "on Windows", opt);
}
eal_usage(prgname);
return -1;
@@ -217,7 +217,7 @@ static void
rte_eal_init_alert(const char *msg)
{
fprintf(stderr, "EAL: FATAL: %s\n", msg);
- RTE_LOG(ERR, EAL, "%s\n", msg);
+ RTE_LOG_LINE(ERR, EAL, "%s", msg);
}
/* Stubs to enable EAL trace point compilation
@@ -312,8 +312,8 @@ rte_eal_init(int argc, char **argv)
/* Prevent creation of shared memory files. */
if (internal_conf->in_memory == 0) {
- RTE_LOG(WARNING, EAL, "Multi-process support is requested, "
- "but not available.\n");
+ RTE_LOG_LINE(WARNING, EAL, "Multi-process support is requested, "
+ "but not available.");
internal_conf->in_memory = 1;
internal_conf->no_shconf = 1;
}
@@ -356,21 +356,21 @@ rte_eal_init(int argc, char **argv)
has_phys_addr = true;
if (eal_mem_virt2iova_init() < 0) {
/* Non-fatal error if physical addresses are not required. */
- RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, "
- "PA will not be available\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot access virt2phys driver, "
+ "PA will not be available");
has_phys_addr = false;
}
iova_mode = internal_conf->iova_mode;
if (iova_mode == RTE_IOVA_DC) {
- RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting");
if (has_phys_addr) {
- RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Selecting IOVA mode according to bus requests");
iova_mode = rte_bus_get_iommu_class();
if (iova_mode == RTE_IOVA_DC) {
if (!RTE_IOVA_IN_MBUF) {
iova_mode = RTE_IOVA_VA;
- RTE_LOG(DEBUG, EAL, "IOVA as VA mode is forced by build option.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "IOVA as VA mode is forced by build option.");
} else {
iova_mode = RTE_IOVA_PA;
}
@@ -392,7 +392,7 @@ rte_eal_init(int argc, char **argv)
return -1;
}
- RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Selected IOVA mode '%s'",
iova_mode == RTE_IOVA_PA ? "PA" : "VA");
rte_eal_get_configuration()->iova_mode = iova_mode;
@@ -442,7 +442,7 @@ rte_eal_init(int argc, char **argv)
&lcore_config[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
- RTE_LOG(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
config->main_lcore, rte_thread_self().opaque_id, cpuset,
ret == 0 ? "" : "...");
@@ -474,7 +474,7 @@ rte_eal_init(int argc, char **argv)
ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id,
&lcore_config[i].cpuset);
if (ret != 0)
- RTE_LOG(DEBUG, EAL, "Cannot set affinity\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot set affinity");
}
/* Initialize services so drivers can register services during probe. */
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index 34b52380ce..c56aa0e687 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -92,7 +92,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
int ret;
if (cb_fn == NULL) {
- RTE_LOG(ERR, EAL, "NULL callback\n");
+ RTE_LOG_LINE(ERR, EAL, "NULL callback");
ret = -EINVAL;
goto exit;
}
@@ -105,7 +105,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
ap = calloc(1, sizeof(*ap));
if (ap == NULL) {
- RTE_LOG(ERR, EAL, "Cannot allocate alarm entry\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate alarm entry");
ret = -ENOMEM;
goto exit;
}
@@ -129,7 +129,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
/* Directly schedule callback execution. */
ret = alarm_set(ap, deadline);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Cannot setup alarm\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm");
goto fail;
}
} else {
@@ -143,7 +143,7 @@ rte_eal_alarm_set(uint64_t us, rte_eal_alarm_callback cb_fn, void *cb_arg)
ret = intr_thread_exec_sync(alarm_task_exec, &task);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Cannot setup alarm in interrupt thread\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot setup alarm in interrupt thread");
goto fail;
}
@@ -187,7 +187,7 @@ rte_eal_alarm_cancel(rte_eal_alarm_callback cb_fn, void *cb_arg)
removed = 0;
if (cb_fn == NULL) {
- RTE_LOG(ERR, EAL, "NULL callback\n");
+ RTE_LOG_LINE(ERR, EAL, "NULL callback");
return -EINVAL;
}
@@ -246,7 +246,7 @@ intr_thread_exec_sync(void (*func)(void *arg), void *arg)
rte_spinlock_lock(&task.lock);
ret = eal_intr_thread_schedule(intr_thread_entry, &task);
if (ret < 0) {
- RTE_LOG(ERR, EAL, "Cannot schedule task to interrupt thread\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot schedule task to interrupt thread");
return -EINVAL;
}
diff --git a/lib/eal/windows/eal_debug.c b/lib/eal/windows/eal_debug.c
index 56ed70df7d..be646080c3 100644
--- a/lib/eal/windows/eal_debug.c
+++ b/lib/eal/windows/eal_debug.c
@@ -48,8 +48,8 @@ rte_dump_stack(void)
error_code = GetLastError();
if (error_code == ERROR_INVALID_ADDRESS) {
/* Missing symbols, print message */
- rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL,
- "%d: [<missing_symbols>]\n", frame_num--);
+ RTE_LOG_LINE(ERR, EAL,
+ "%d: [<missing_symbols>]", frame_num--);
continue;
} else {
RTE_LOG_WIN32_ERR("SymFromAddr()");
@@ -67,8 +67,8 @@ rte_dump_stack(void)
}
}
- rte_log(RTE_LOG_ERR, RTE_LOGTYPE_EAL,
- "%d: [%s (%s+0x%0llx)[0x%0llX]]\n", frame_num,
+ RTE_LOG_LINE(ERR, EAL,
+ "%d: [%s (%s+0x%0llx)[0x%0llX]]", frame_num,
error_code ? "<unknown>" : line.FileName,
symbol_info->Name, sym_disp, symbol_info->Address);
frame_num--;
diff --git a/lib/eal/windows/eal_dev.c b/lib/eal/windows/eal_dev.c
index 35191056fd..264bc4a649 100644
--- a/lib/eal/windows/eal_dev.c
+++ b/lib/eal/windows/eal_dev.c
@@ -7,27 +7,27 @@
int
rte_dev_event_monitor_start(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows");
return -1;
}
int
rte_dev_event_monitor_stop(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows");
return -1;
}
int
rte_dev_hotplug_handle_enable(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows");
return -1;
}
int
rte_dev_hotplug_handle_disable(void)
{
- RTE_LOG(ERR, EAL, "Device event is not supported for Windows\n");
+ RTE_LOG_LINE(ERR, EAL, "Device event is not supported for Windows");
return -1;
}
diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c
index 701cd0cb08..c7dfe2d238 100644
--- a/lib/eal/windows/eal_hugepages.c
+++ b/lib/eal/windows/eal_hugepages.c
@@ -89,8 +89,8 @@ hugepage_info_init(void)
}
hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz;
- RTE_LOG(DEBUG, EAL,
- "Found %u hugepages of %zu bytes on socket %u\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Found %u hugepages of %zu bytes on socket %u",
hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id);
}
@@ -105,13 +105,13 @@ int
eal_hugepage_info_init(void)
{
if (hugepage_claim_privilege() < 0) {
- RTE_LOG(ERR, EAL, "Cannot claim hugepage privilege, "
- "verify that large-page support privilege is assigned to the current user\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot claim hugepage privilege, "
+ "verify that large-page support privilege is assigned to the current user");
return -1;
}
if (hugepage_info_init() < 0) {
- RTE_LOG(ERR, EAL, "Cannot discover available hugepages\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot discover available hugepages");
return -1;
}
diff --git a/lib/eal/windows/eal_interrupts.c b/lib/eal/windows/eal_interrupts.c
index 49efdc098c..a9c62453b8 100644
--- a/lib/eal/windows/eal_interrupts.c
+++ b/lib/eal/windows/eal_interrupts.c
@@ -39,7 +39,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused)
bool finished = false;
if (eal_intr_thread_handle_init() < 0) {
- RTE_LOG(ERR, EAL, "Cannot open interrupt thread handle\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot open interrupt thread handle");
goto cleanup;
}
@@ -57,7 +57,7 @@ eal_intr_thread_main(LPVOID arg __rte_unused)
DWORD error = GetLastError();
if (error != WAIT_IO_COMPLETION) {
RTE_LOG_WIN32_ERR("GetQueuedCompletionStatusEx()");
- RTE_LOG(ERR, EAL, "Failed waiting for interrupts\n");
+ RTE_LOG_LINE(ERR, EAL, "Failed waiting for interrupts");
break;
}
@@ -94,7 +94,7 @@ rte_eal_intr_init(void)
intr_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 1);
if (intr_iocp == NULL) {
RTE_LOG_WIN32_ERR("CreateIoCompletionPort()");
- RTE_LOG(ERR, EAL, "Cannot create interrupt IOCP\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt IOCP");
return -1;
}
@@ -102,7 +102,7 @@ rte_eal_intr_init(void)
eal_intr_thread_main, NULL);
if (ret != 0) {
rte_errno = -ret;
- RTE_LOG(ERR, EAL, "Cannot create interrupt thread\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot create interrupt thread");
}
return ret;
@@ -140,7 +140,7 @@ eal_intr_thread_cancel(void)
if (!PostQueuedCompletionStatus(
intr_iocp, 0, IOCP_KEY_SHUTDOWN, NULL)) {
RTE_LOG_WIN32_ERR("PostQueuedCompletionStatus()");
- RTE_LOG(ERR, EAL, "Cannot cancel interrupt thread\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot cancel interrupt thread");
return;
}
diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c
index 286fe241eb..da3be08aab 100644
--- a/lib/eal/windows/eal_lcore.c
+++ b/lib/eal/windows/eal_lcore.c
@@ -65,7 +65,7 @@ eal_query_group_affinity(void)
&infos_size)) {
DWORD error = GetLastError();
if (error != ERROR_INSUFFICIENT_BUFFER) {
- RTE_LOG(ERR, EAL, "Cannot get group information size, error %lu\n", error);
+ RTE_LOG_LINE(ERR, EAL, "Cannot get group information size, error %lu", error);
rte_errno = EINVAL;
ret = -1;
goto cleanup;
@@ -74,7 +74,7 @@ eal_query_group_affinity(void)
infos = malloc(infos_size);
if (infos == NULL) {
- RTE_LOG(ERR, EAL, "Cannot allocate memory for NUMA node information\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory for NUMA node information");
rte_errno = ENOMEM;
ret = -1;
goto cleanup;
@@ -82,7 +82,7 @@ eal_query_group_affinity(void)
if (!GetLogicalProcessorInformationEx(RelationGroup, infos,
&infos_size)) {
- RTE_LOG(ERR, EAL, "Cannot get group information, error %lu\n",
+ RTE_LOG_LINE(ERR, EAL, "Cannot get group information, error %lu",
GetLastError());
rte_errno = EINVAL;
ret = -1;
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index aa7589b81d..fa9d1fdc1e 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -52,7 +52,7 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
}
/* Bugcheck, should not happen. */
- RTE_LOG(DEBUG, EAL, "Attempted to reallocate segment %p "
+ RTE_LOG_LINE(DEBUG, EAL, "Attempted to reallocate segment %p "
"(size %zu) on socket %d", ms->addr,
ms->len, ms->socket_id);
return -1;
@@ -66,8 +66,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
/* Request a new chunk of memory from OS. */
addr = eal_mem_alloc_socket(alloc_sz, socket_id);
if (addr == NULL) {
- RTE_LOG(DEBUG, EAL, "Cannot allocate %zu bytes "
- "on socket %d\n", alloc_sz, socket_id);
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate %zu bytes "
+ "on socket %d", alloc_sz, socket_id);
return -1;
}
} else {
@@ -79,15 +79,15 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
* error, because it breaks MSL assumptions.
*/
if ((addr != NULL) && (addr != requested_addr)) {
- RTE_LOG(CRIT, EAL, "Address %p occupied by an alien "
- " allocation - MSL is not VA-contiguous!\n",
+ RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien "
+ " allocation - MSL is not VA-contiguous!",
requested_addr);
return -1;
}
if (addr == NULL) {
- RTE_LOG(DEBUG, EAL, "Cannot commit reserved memory %p "
- "(size %zu) on socket %d\n",
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot commit reserved memory %p "
+ "(size %zu) on socket %d",
requested_addr, alloc_sz, socket_id);
return -1;
}
@@ -101,8 +101,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
iova = rte_mem_virt2iova(addr);
if (iova == RTE_BAD_IOVA) {
- RTE_LOG(DEBUG, EAL,
- "Cannot get IOVA of allocated segment\n");
+ RTE_LOG_LINE(DEBUG, EAL,
+ "Cannot get IOVA of allocated segment");
goto error;
}
@@ -115,12 +115,12 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
page = &info.VirtualAttributes;
if (!page->Valid || !page->LargePage) {
- RTE_LOG(DEBUG, EAL, "Got regular page instead of a hugepage\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Got regular page instead of a hugepage");
goto error;
}
if (page->Node != numa_node) {
- RTE_LOG(DEBUG, EAL,
- "NUMA node hint %u (socket %d) not respected, got %u\n",
+ RTE_LOG_LINE(DEBUG, EAL,
+ "NUMA node hint %u (socket %d) not respected, got %u",
numa_node, socket_id, page->Node);
goto error;
}
@@ -141,8 +141,8 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
/* During decommitment, memory is temporarily returned
* to the system and the address may become unavailable.
*/
- RTE_LOG(CRIT, EAL, "Address %p occupied by an alien "
- " allocation - MSL is not VA-contiguous!\n", addr);
+ RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien "
+ " allocation - MSL is not VA-contiguous!", addr);
}
return -1;
}
@@ -153,8 +153,8 @@ free_seg(struct rte_memseg *ms)
if (eal_mem_decommit(ms->addr, ms->len)) {
if (rte_errno == EADDRNOTAVAIL) {
/* See alloc_seg() for explanation. */
- RTE_LOG(CRIT, EAL, "Address %p occupied by an alien "
- " allocation - MSL is not VA-contiguous!\n",
+ RTE_LOG_LINE(CRIT, EAL, "Address %p occupied by an alien "
+ " allocation - MSL is not VA-contiguous!",
ms->addr);
}
return -1;
@@ -233,8 +233,8 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
map_addr = RTE_PTR_ADD(cur_msl->base_va, cur_idx * page_sz);
if (alloc_seg(cur, map_addr, wa->socket, wa->hi)) {
- RTE_LOG(DEBUG, EAL, "attempted to allocate %i segments, "
- "but only %i were allocated\n", need, i);
+ RTE_LOG_LINE(DEBUG, EAL, "attempted to allocate %i segments, "
+ "but only %i were allocated", need, i);
/* if exact number wasn't requested, stop */
if (!wa->exact)
@@ -249,7 +249,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
rte_fbarray_set_free(arr, j);
if (free_seg(tmp))
- RTE_LOG(DEBUG, EAL, "Cannot free page\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot free page");
}
/* clear the list */
if (wa->ms)
@@ -318,7 +318,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
eal_get_internal_configuration();
if (internal_conf->legacy_mem) {
- RTE_LOG(ERR, EAL, "dynamic allocation not supported in legacy mode\n");
+ RTE_LOG_LINE(ERR, EAL, "dynamic allocation not supported in legacy mode");
return -ENOTSUP;
}
@@ -330,7 +330,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
}
}
if (!hi) {
- RTE_LOG(ERR, EAL, "cannot find relevant hugepage_info entry\n");
+ RTE_LOG_LINE(ERR, EAL, "cannot find relevant hugepage_info entry");
return -1;
}
@@ -346,7 +346,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
/* memalloc is locked, so it's safe to use thread-unsafe version */
ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa);
if (ret == 0) {
- RTE_LOG(ERR, EAL, "cannot find suitable memseg_list\n");
+ RTE_LOG_LINE(ERR, EAL, "cannot find suitable memseg_list");
ret = -1;
} else if (ret > 0) {
ret = (int)wa.segs_allocated;
@@ -383,7 +383,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
/* if this page is marked as unfreeable, fail */
if (cur->flags & RTE_MEMSEG_FLAG_DO_NOT_FREE) {
- RTE_LOG(DEBUG, EAL, "Page is not allowed to be freed\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Page is not allowed to be freed");
ret = -1;
continue;
}
@@ -396,7 +396,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
break;
}
if (i == RTE_DIM(internal_conf->hugepage_info)) {
- RTE_LOG(ERR, EAL, "Can't find relevant hugepage_info entry\n");
+ RTE_LOG_LINE(ERR, EAL, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
}
@@ -411,7 +411,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
if (walk_res == 1)
continue;
if (walk_res == 0)
- RTE_LOG(ERR, EAL, "Couldn't find memseg list\n");
+ RTE_LOG_LINE(ERR, EAL, "Couldn't find memseg list");
ret = -1;
}
return ret;
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index fd39155163..7e1e8d4c84 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -114,8 +114,8 @@ eal_mem_win32api_init(void)
library_name, function);
/* Contrary to the docs, Server 2016 is not supported. */
- RTE_LOG(ERR, EAL, "Windows 10 or Windows Server 2019 "
- " is required for memory management\n");
+ RTE_LOG_LINE(ERR, EAL, "Windows 10 or Windows Server 2019 "
+ " is required for memory management");
ret = -1;
}
@@ -173,8 +173,8 @@ eal_mem_virt2iova_init(void)
detail = malloc(detail_size);
if (detail == NULL) {
- RTE_LOG(ERR, EAL, "Cannot allocate virt2phys "
- "device interface detail data\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate virt2phys "
+ "device interface detail data");
goto exit;
}
@@ -185,7 +185,7 @@ eal_mem_virt2iova_init(void)
goto exit;
}
- RTE_LOG(DEBUG, EAL, "Found virt2phys device: %s\n", detail->DevicePath);
+ RTE_LOG_LINE(DEBUG, EAL, "Found virt2phys device: %s", detail->DevicePath);
virt2phys_device = CreateFile(
detail->DevicePath, 0, 0, NULL, OPEN_EXISTING, 0, NULL);
@@ -574,8 +574,8 @@ rte_mem_map(void *requested_addr, size_t size, int prot, int flags,
int ret = mem_free(requested_addr, size, true);
if (ret) {
if (ret > 0) {
- RTE_LOG(ERR, EAL, "Cannot map memory "
- "to a region not reserved\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot map memory "
+ "to a region not reserved");
rte_errno = EADDRNOTAVAIL;
}
return NULL;
@@ -691,7 +691,7 @@ eal_nohuge_init(void)
NULL, mem_sz, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
if (addr == NULL) {
RTE_LOG_WIN32_ERR("VirtualAlloc(size=%#zx)", mem_sz);
- RTE_LOG(ERR, EAL, "Cannot allocate memory\n");
+ RTE_LOG_LINE(ERR, EAL, "Cannot allocate memory");
return -1;
}
@@ -702,9 +702,9 @@ eal_nohuge_init(void)
if (mcfg->dma_maskbits &&
rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
- RTE_LOG(ERR, EAL,
+ RTE_LOG_LINE(ERR, EAL,
"%s(): couldn't allocate memory due to IOVA "
- "exceeding limits of current DMA mask.\n", __func__);
+ "exceeding limits of current DMA mask.", __func__);
return -1;
}
diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h
index 43b228d388..ee206f365d 100644
--- a/lib/eal/windows/eal_windows.h
+++ b/lib/eal/windows/eal_windows.h
@@ -17,7 +17,7 @@
*/
#define EAL_LOG_NOT_IMPLEMENTED() \
do { \
- RTE_LOG(DEBUG, EAL, "%s() is not implemented\n", __func__); \
+ RTE_LOG_LINE(DEBUG, EAL, "%s() is not implemented", __func__); \
rte_errno = ENOTSUP; \
} while (0)
@@ -25,7 +25,7 @@
* Log current function as a stub.
*/
#define EAL_LOG_STUB() \
- RTE_LOG(DEBUG, EAL, "Windows: %s() is a stub\n", __func__)
+ RTE_LOG_LINE(DEBUG, EAL, "Windows: %s() is a stub", __func__)
/**
* Create a map of processors and cores on the system.
diff --git a/lib/eal/windows/include/rte_windows.h b/lib/eal/windows/include/rte_windows.h
index 83730c3d2e..015072885b 100644
--- a/lib/eal/windows/include/rte_windows.h
+++ b/lib/eal/windows/include/rte_windows.h
@@ -48,8 +48,8 @@ extern "C" {
* Log GetLastError() with context, usually a Win32 API function and arguments.
*/
#define RTE_LOG_WIN32_ERR(...) \
- RTE_LOG(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \
- RTE_FMT_HEAD(__VA_ARGS__,) "\n", GetLastError(), \
+ RTE_LOG_LINE(DEBUG, EAL, RTE_FMT("GetLastError()=%lu: " \
+ RTE_FMT_HEAD(__VA_ARGS__,), GetLastError(), \
RTE_FMT_TAIL(__VA_ARGS__,)))
#ifdef __cplusplus
diff --git a/lib/eal/windows/rte_thread.c b/lib/eal/windows/rte_thread.c
index 145ac4b5aa..7c62f57e0d 100644
--- a/lib/eal/windows/rte_thread.c
+++ b/lib/eal/windows/rte_thread.c
@@ -67,7 +67,7 @@ static int
thread_log_last_error(const char *message)
{
DWORD error = GetLastError();
- RTE_LOG(DEBUG, EAL, "GetLastError()=%lu: %s\n", error, message);
+ RTE_LOG_LINE(DEBUG, EAL, "GetLastError()=%lu: %s", error, message);
return thread_translate_win32_error(error);
}
@@ -90,7 +90,7 @@ thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri,
*os_pri = THREAD_PRIORITY_TIME_CRITICAL;
break;
default:
- RTE_LOG(DEBUG, EAL, "The requested priority value is invalid.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "The requested priority value is invalid.");
return EINVAL;
}
@@ -109,7 +109,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class,
}
break;
case HIGH_PRIORITY_CLASS:
- RTE_LOG(WARNING, EAL, "The OS priority class is high not real-time.\n");
+ RTE_LOG_LINE(WARNING, EAL, "The OS priority class is high not real-time.");
/* FALLTHROUGH */
case REALTIME_PRIORITY_CLASS:
if (os_pri == THREAD_PRIORITY_TIME_CRITICAL) {
@@ -118,7 +118,7 @@ thread_map_os_priority_to_eal_value(int os_pri, DWORD pri_class,
}
break;
default:
- RTE_LOG(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "The OS priority value does not map to an EAL-defined priority.");
return EINVAL;
}
@@ -148,7 +148,7 @@ convert_cpuset_to_affinity(const rte_cpuset_t *cpuset,
if (affinity->Group == (USHORT)-1) {
affinity->Group = cpu_affinity->Group;
} else if (affinity->Group != cpu_affinity->Group) {
- RTE_LOG(DEBUG, EAL, "All processors must belong to the same processor group\n");
+ RTE_LOG_LINE(DEBUG, EAL, "All processors must belong to the same processor group");
ret = ENOTSUP;
goto cleanup;
}
@@ -194,7 +194,7 @@ rte_thread_create(rte_thread_t *thread_id,
ctx = calloc(1, sizeof(*ctx));
if (ctx == NULL) {
- RTE_LOG(DEBUG, EAL, "Insufficient memory for thread context allocations\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Insufficient memory for thread context allocations");
ret = ENOMEM;
goto cleanup;
}
@@ -217,7 +217,7 @@ rte_thread_create(rte_thread_t *thread_id,
&thread_affinity
);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity");
thread_exit = true;
goto resume_thread;
}
@@ -232,7 +232,7 @@ rte_thread_create(rte_thread_t *thread_id,
ret = rte_thread_set_priority(*thread_id,
thread_attr->priority);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "Unable to set thread priority\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Unable to set thread priority");
thread_exit = true;
goto resume_thread;
}
@@ -360,7 +360,7 @@ rte_thread_set_name(rte_thread_t thread_id, const char *thread_name)
CloseHandle(thread_handle);
if (ret != 0)
- RTE_LOG(DEBUG, EAL, "Failed to set thread name\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Failed to set thread name");
}
int
@@ -446,7 +446,7 @@ rte_thread_key_create(rte_thread_key *key,
{
*key = malloc(sizeof(**key));
if ((*key) == NULL) {
- RTE_LOG(DEBUG, EAL, "Cannot allocate TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Cannot allocate TLS key.");
rte_errno = ENOMEM;
return -1;
}
@@ -464,7 +464,7 @@ int
rte_thread_key_delete(rte_thread_key key)
{
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return -1;
}
@@ -484,7 +484,7 @@ rte_thread_value_set(rte_thread_key key, const void *value)
char *p;
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return -1;
}
@@ -504,7 +504,7 @@ rte_thread_value_get(rte_thread_key key)
void *output;
if (!key) {
- RTE_LOG(DEBUG, EAL, "Invalid TLS key.\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Invalid TLS key.");
rte_errno = EINVAL;
return NULL;
}
@@ -532,7 +532,7 @@ rte_thread_set_affinity_by_id(rte_thread_t thread_id,
ret = convert_cpuset_to_affinity(cpuset, &thread_affinity);
if (ret != 0) {
- RTE_LOG(DEBUG, EAL, "Unable to convert cpuset to thread affinity\n");
+ RTE_LOG_LINE(DEBUG, EAL, "Unable to convert cpuset to thread affinity");
goto cleanup;
}
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 78fb9250ef..e441263335 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -512,13 +512,13 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
if (online_cpu_socket_bitmask == 0) {
- RTE_LOG(ERR, EFD, "At least one CPU socket must be enabled "
- "in the bitmask\n");
+ RTE_LOG_LINE(ERR, EFD, "At least one CPU socket must be enabled "
+ "in the bitmask");
return NULL;
}
if (max_num_rules == 0) {
- RTE_LOG(ERR, EFD, "Max num rules must be higher than 0\n");
+ RTE_LOG_LINE(ERR, EFD, "Max num rules must be higher than 0");
return NULL;
}
@@ -557,7 +557,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
te = rte_zmalloc("EFD_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, EFD, "tailq entry allocation failed\n");
+ RTE_LOG_LINE(ERR, EFD, "tailq entry allocation failed");
goto error_unlock_exit;
}
@@ -567,15 +567,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
RTE_CACHE_LINE_SIZE,
offline_cpu_socket);
if (table == NULL) {
- RTE_LOG(ERR, EFD, "Allocating EFD table management structure"
- " on socket %u failed\n",
+ RTE_LOG_LINE(ERR, EFD, "Allocating EFD table management structure"
+ " on socket %u failed",
offline_cpu_socket);
goto error_unlock_exit;
}
- RTE_LOG(DEBUG, EFD, "Allocated EFD table management structure "
- "on socket %u\n", offline_cpu_socket);
+ RTE_LOG_LINE(DEBUG, EFD, "Allocated EFD table management structure "
+ "on socket %u", offline_cpu_socket);
table->max_num_rules = num_chunks * EFD_TARGET_CHUNK_MAX_NUM_RULES;
table->num_rules = 0;
@@ -589,16 +589,16 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
RTE_CACHE_LINE_SIZE,
offline_cpu_socket);
if (key_array == NULL) {
- RTE_LOG(ERR, EFD, "Allocating key array"
- " on socket %u failed\n",
+ RTE_LOG_LINE(ERR, EFD, "Allocating key array"
+ " on socket %u failed",
offline_cpu_socket);
goto error_unlock_exit;
}
table->keys = key_array;
strlcpy(table->name, name, sizeof(table->name));
- RTE_LOG(DEBUG, EFD, "Creating an EFD table with %u chunks,"
- " which potentially supports %u entries\n",
+ RTE_LOG_LINE(DEBUG, EFD, "Creating an EFD table with %u chunks,"
+ " which potentially supports %u entries",
num_chunks, table->max_num_rules);
/* Make sure all the allocatable table pointers are NULL initially */
@@ -626,15 +626,15 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
RTE_CACHE_LINE_SIZE,
socket_id);
if (table->chunks[socket_id] == NULL) {
- RTE_LOG(ERR, EFD,
+ RTE_LOG_LINE(ERR, EFD,
"Allocating EFD online table on "
- "socket %u failed\n",
+ "socket %u failed",
socket_id);
goto error_unlock_exit;
}
- RTE_LOG(DEBUG, EFD,
+ RTE_LOG_LINE(DEBUG, EFD,
"Allocated EFD online table of size "
- "%"PRIu64" bytes (%.2f MB) on socket %u\n",
+ "%"PRIu64" bytes (%.2f MB) on socket %u",
online_table_size,
(float) online_table_size /
(1024.0F * 1024.0F),
@@ -678,14 +678,14 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
RTE_CACHE_LINE_SIZE,
offline_cpu_socket);
if (table->offline_chunks == NULL) {
- RTE_LOG(ERR, EFD, "Allocating EFD offline table on socket %u "
- "failed\n", offline_cpu_socket);
+ RTE_LOG_LINE(ERR, EFD, "Allocating EFD offline table on socket %u "
+ "failed", offline_cpu_socket);
goto error_unlock_exit;
}
- RTE_LOG(DEBUG, EFD,
+ RTE_LOG_LINE(DEBUG, EFD,
"Allocated EFD offline table of size %"PRIu64" bytes "
- " (%.2f MB) on socket %u\n", offline_table_size,
+ " (%.2f MB) on socket %u", offline_table_size,
(float) offline_table_size / (1024.0F * 1024.0F),
offline_cpu_socket);
@@ -698,7 +698,7 @@ rte_efd_create(const char *name, uint32_t max_num_rules, uint32_t key_len,
r = rte_ring_create(ring_name, rte_align32pow2(table->max_num_rules),
offline_cpu_socket, 0);
if (r == NULL) {
- RTE_LOG(ERR, EFD, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, EFD, "memory allocation failed");
rte_efd_free(table);
return NULL;
}
@@ -1018,9 +1018,9 @@ efd_compute_update(struct rte_efd_table * const table,
if (found == 0) {
/* Key does not exist. Insert the rule into the bin/group */
if (unlikely(current_group->num_rules >= EFD_MAX_GROUP_NUM_RULES)) {
- RTE_LOG(ERR, EFD,
+ RTE_LOG_LINE(ERR, EFD,
"Fatal: No room remaining for insert into "
- "chunk %u group %u bin %u\n",
+ "chunk %u group %u bin %u",
*chunk_id,
current_group_id, *bin_id);
return RTE_EFD_UPDATE_FAILED;
@@ -1028,9 +1028,9 @@ efd_compute_update(struct rte_efd_table * const table,
if (unlikely(current_group->num_rules ==
(EFD_MAX_GROUP_NUM_RULES - 1))) {
- RTE_LOG(INFO, EFD, "Warn: Insert into last "
+ RTE_LOG_LINE(INFO, EFD, "Warn: Insert into last "
"available slot in chunk %u "
- "group %u bin %u\n", *chunk_id,
+ "group %u bin %u", *chunk_id,
current_group_id, *bin_id);
status = RTE_EFD_UPDATE_WARN_GROUP_FULL;
}
@@ -1117,10 +1117,10 @@ efd_compute_update(struct rte_efd_table * const table,
if (current_group != new_group &&
new_group->num_rules + bin_size >
EFD_MAX_GROUP_NUM_RULES) {
- RTE_LOG(DEBUG, EFD,
+ RTE_LOG_LINE(DEBUG, EFD,
"Unable to move_groups to dest group "
"containing %u entries."
- "bin_size:%u choice:%02x\n",
+ "bin_size:%u choice:%02x",
new_group->num_rules, bin_size,
choice - 1);
goto next_choice;
@@ -1135,9 +1135,9 @@ efd_compute_update(struct rte_efd_table * const table,
if (!ret)
return status;
- RTE_LOG(DEBUG, EFD,
+ RTE_LOG_LINE(DEBUG, EFD,
"Failed to find perfect hash for group "
- "containing %u entries. bin_size:%u choice:%02x\n",
+ "containing %u entries. bin_size:%u choice:%02x",
new_group->num_rules, bin_size, choice - 1);
/* Restore groups modified to their previous state */
revert_groups(current_group, new_group, bin_size);
diff --git a/lib/fib/rte_fib.c b/lib/fib/rte_fib.c
index f88e71a59d..3d9bf6fe9d 100644
--- a/lib/fib/rte_fib.c
+++ b/lib/fib/rte_fib.c
@@ -171,8 +171,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
rib = rte_rib_create(name, socket_id, &rib_conf);
if (rib == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate RIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate RIB %s", name);
return NULL;
}
@@ -196,8 +196,8 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
/* allocate tailq entry */
te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate tailq entry for FIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate tailq entry for FIB %s", name);
rte_errno = ENOMEM;
goto exit;
}
@@ -206,7 +206,7 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
fib = rte_zmalloc_socket(mem_name,
sizeof(struct rte_fib), RTE_CACHE_LINE_SIZE, socket_id);
if (fib == NULL) {
- RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name);
+ RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name);
rte_errno = ENOMEM;
goto free_te;
}
@@ -217,9 +217,9 @@ rte_fib_create(const char *name, int socket_id, struct rte_fib_conf *conf)
fib->def_nh = conf->default_nh;
ret = init_dataplane(fib, socket_id, conf);
if (ret < 0) {
- RTE_LOG(ERR, LPM,
+ RTE_LOG_LINE(ERR, LPM,
"FIB dataplane struct %s memory allocation failed "
- "with err %d\n", name, ret);
+ "with err %d", name, ret);
rte_errno = -ret;
goto free_fib;
}
diff --git a/lib/fib/rte_fib6.c b/lib/fib/rte_fib6.c
index ab1d960479..2d23c09eea 100644
--- a/lib/fib/rte_fib6.c
+++ b/lib/fib/rte_fib6.c
@@ -171,8 +171,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
rib = rte_rib6_create(name, socket_id, &rib_conf);
if (rib == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate RIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate RIB %s", name);
return NULL;
}
@@ -196,8 +196,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
/* allocate tailq entry */
te = rte_zmalloc("FIB_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate tailq entry for FIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate tailq entry for FIB %s", name);
rte_errno = ENOMEM;
goto exit;
}
@@ -206,7 +206,7 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
fib = rte_zmalloc_socket(mem_name,
sizeof(struct rte_fib6), RTE_CACHE_LINE_SIZE, socket_id);
if (fib == NULL) {
- RTE_LOG(ERR, LPM, "FIB %s memory allocation failed\n", name);
+ RTE_LOG_LINE(ERR, LPM, "FIB %s memory allocation failed", name);
rte_errno = ENOMEM;
goto free_te;
}
@@ -217,8 +217,8 @@ rte_fib6_create(const char *name, int socket_id, struct rte_fib6_conf *conf)
fib->def_nh = conf->default_nh;
ret = init_dataplane(fib, socket_id, conf);
if (ret < 0) {
- RTE_LOG(ERR, LPM,
- "FIB dataplane struct %s memory allocation failed\n",
+ RTE_LOG_LINE(ERR, LPM,
+ "FIB dataplane struct %s memory allocation failed",
name);
rte_errno = -ret;
goto free_fib;
diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 8e4364f060..2a7b38843d 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -164,7 +164,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
hash_list = RTE_TAILQ_CAST(rte_hash_tailq.head, rte_hash_list);
if (params == NULL) {
- RTE_LOG(ERR, HASH, "rte_hash_create has no parameters\n");
+ RTE_LOG_LINE(ERR, HASH, "rte_hash_create has no parameters");
return NULL;
}
@@ -173,13 +173,13 @@ rte_hash_create(const struct rte_hash_parameters *params)
(params->entries < RTE_HASH_BUCKET_ENTRIES) ||
(params->key_len == 0)) {
rte_errno = EINVAL;
- RTE_LOG(ERR, HASH, "rte_hash_create has invalid parameters\n");
+ RTE_LOG_LINE(ERR, HASH, "rte_hash_create has invalid parameters");
return NULL;
}
if (params->extra_flag & ~RTE_HASH_EXTRA_FLAGS_MASK) {
rte_errno = EINVAL;
- RTE_LOG(ERR, HASH, "rte_hash_create: unsupported extra flags\n");
+ RTE_LOG_LINE(ERR, HASH, "rte_hash_create: unsupported extra flags");
return NULL;
}
@@ -187,8 +187,8 @@ rte_hash_create(const struct rte_hash_parameters *params)
if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY) &&
(params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF)) {
rte_errno = EINVAL;
- RTE_LOG(ERR, HASH, "rte_hash_create: choose rw concurrency or "
- "rw concurrency lock free\n");
+ RTE_LOG_LINE(ERR, HASH, "rte_hash_create: choose rw concurrency or "
+ "rw concurrency lock free");
return NULL;
}
@@ -238,7 +238,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
r = rte_ring_create_elem(ring_name, sizeof(uint32_t),
rte_align32pow2(num_key_slots), params->socket_id, 0);
if (r == NULL) {
- RTE_LOG(ERR, HASH, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "memory allocation failed");
goto err;
}
@@ -254,8 +254,8 @@ rte_hash_create(const struct rte_hash_parameters *params)
params->socket_id, 0);
if (r_ext == NULL) {
- RTE_LOG(ERR, HASH, "ext buckets memory allocation "
- "failed\n");
+ RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation "
+ "failed");
goto err;
}
}
@@ -280,7 +280,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
te = rte_zmalloc("HASH_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, HASH, "tailq entry allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "tailq entry allocation failed");
goto err_unlock;
}
@@ -288,7 +288,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
RTE_CACHE_LINE_SIZE, params->socket_id);
if (h == NULL) {
- RTE_LOG(ERR, HASH, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "memory allocation failed");
goto err_unlock;
}
@@ -297,7 +297,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
RTE_CACHE_LINE_SIZE, params->socket_id);
if (buckets == NULL) {
- RTE_LOG(ERR, HASH, "buckets memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "buckets memory allocation failed");
goto err_unlock;
}
@@ -307,8 +307,8 @@ rte_hash_create(const struct rte_hash_parameters *params)
num_buckets * sizeof(struct rte_hash_bucket),
RTE_CACHE_LINE_SIZE, params->socket_id);
if (buckets_ext == NULL) {
- RTE_LOG(ERR, HASH, "ext buckets memory allocation "
- "failed\n");
+ RTE_LOG_LINE(ERR, HASH, "ext buckets memory allocation "
+ "failed");
goto err_unlock;
}
/* Populate ext bkt ring. We reserve 0 similar to the
@@ -323,8 +323,8 @@ rte_hash_create(const struct rte_hash_parameters *params)
ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) *
num_key_slots, 0);
if (ext_bkt_to_free == NULL) {
- RTE_LOG(ERR, HASH, "ext bkt to free memory allocation "
- "failed\n");
+ RTE_LOG_LINE(ERR, HASH, "ext bkt to free memory allocation "
+ "failed");
goto err_unlock;
}
}
@@ -339,7 +339,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
RTE_CACHE_LINE_SIZE, params->socket_id);
if (k == NULL) {
- RTE_LOG(ERR, HASH, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "memory allocation failed");
goto err_unlock;
}
@@ -347,7 +347,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
RTE_CACHE_LINE_SIZE, params->socket_id);
if (tbl_chng_cnt == NULL) {
- RTE_LOG(ERR, HASH, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "memory allocation failed");
goto err_unlock;
}
@@ -395,7 +395,7 @@ rte_hash_create(const struct rte_hash_parameters *params)
sizeof(struct lcore_cache) * RTE_MAX_LCORE,
RTE_CACHE_LINE_SIZE, params->socket_id);
if (local_free_slots == NULL) {
- RTE_LOG(ERR, HASH, "local free slots memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "local free slots memory allocation failed");
goto err_unlock;
}
}
@@ -637,7 +637,7 @@ rte_hash_reset(struct rte_hash *h)
/* Reclaim all the resources */
rte_rcu_qsbr_dq_reclaim(h->dq, ~0, NULL, &pending, NULL);
if (pending != 0)
- RTE_LOG(ERR, HASH, "RCU reclaim all resources failed\n");
+ RTE_LOG_LINE(ERR, HASH, "RCU reclaim all resources failed");
}
memset(h->buckets, 0, h->num_buckets * sizeof(struct rte_hash_bucket));
@@ -1511,8 +1511,8 @@ __hash_rcu_qsbr_free_resource(void *p, void *e, unsigned int n)
/* Return key indexes to free slot ring */
ret = free_slot(h, rcu_dq_entry.key_idx);
if (ret < 0) {
- RTE_LOG(ERR, HASH,
- "%s: could not enqueue free slots in global ring\n",
+ RTE_LOG_LINE(ERR, HASH,
+ "%s: could not enqueue free slots in global ring",
__func__);
}
}
@@ -1540,7 +1540,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
hash_rcu_cfg = rte_zmalloc(NULL, sizeof(struct rte_hash_rcu_config), 0);
if (hash_rcu_cfg == NULL) {
- RTE_LOG(ERR, HASH, "memory allocation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "memory allocation failed");
return 1;
}
@@ -1564,7 +1564,7 @@ rte_hash_rcu_qsbr_add(struct rte_hash *h, struct rte_hash_rcu_config *cfg)
h->dq = rte_rcu_qsbr_dq_create(&params);
if (h->dq == NULL) {
rte_free(hash_rcu_cfg);
- RTE_LOG(ERR, HASH, "HASH defer queue creation failed\n");
+ RTE_LOG_LINE(ERR, HASH, "HASH defer queue creation failed");
return 1;
}
} else {
@@ -1593,8 +1593,8 @@ remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt,
int ret = free_slot(h, bkt->key_idx[i]);
if (ret < 0) {
- RTE_LOG(ERR, HASH,
- "%s: could not enqueue free slots in global ring\n",
+ RTE_LOG_LINE(ERR, HASH,
+ "%s: could not enqueue free slots in global ring",
__func__);
}
}
@@ -1783,7 +1783,7 @@ __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
} else if (h->dq)
/* Push into QSBR FIFO if using RTE_HASH_QSBR_MODE_DQ */
if (rte_rcu_qsbr_dq_enqueue(h->dq, &rcu_dq_entry) != 0)
- RTE_LOG(ERR, HASH, "Failed to push QSBR FIFO\n");
+ RTE_LOG_LINE(ERR, HASH, "Failed to push QSBR FIFO");
}
__hash_rw_writer_unlock(h);
return ret;
diff --git a/lib/hash/rte_fbk_hash.c b/lib/hash/rte_fbk_hash.c
index faeb50cd89..20433a92c8 100644
--- a/lib/hash/rte_fbk_hash.c
+++ b/lib/hash/rte_fbk_hash.c
@@ -118,7 +118,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
te = rte_zmalloc("FBK_HASH_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, HASH, "Failed to allocate tailq entry\n");
+ RTE_LOG_LINE(ERR, HASH, "Failed to allocate tailq entry");
goto exit;
}
@@ -126,7 +126,7 @@ rte_fbk_hash_create(const struct rte_fbk_hash_params *params)
ht = rte_zmalloc_socket(hash_name, mem_size,
0, params->socket_id);
if (ht == NULL) {
- RTE_LOG(ERR, HASH, "Failed to allocate fbk hash table\n");
+ RTE_LOG_LINE(ERR, HASH, "Failed to allocate fbk hash table");
rte_free(te);
goto exit;
}
diff --git a/lib/hash/rte_hash_crc.c b/lib/hash/rte_hash_crc.c
index 1439d8a71f..0d52840eaa 100644
--- a/lib/hash/rte_hash_crc.c
+++ b/lib/hash/rte_hash_crc.c
@@ -34,8 +34,8 @@ rte_hash_crc_set_alg(uint8_t alg)
#if defined RTE_ARCH_X86
if (!(alg & CRC32_SSE42_x64))
- RTE_LOG(WARNING, HASH_CRC,
- "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42\n");
+ RTE_LOG_LINE(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_x64/CRC32_SSE42");
if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_EM64T) || alg == CRC32_SSE42)
rte_hash_crc32_alg = CRC32_SSE42;
else
@@ -44,15 +44,15 @@ rte_hash_crc_set_alg(uint8_t alg)
#if defined RTE_ARCH_ARM64
if (!(alg & CRC32_ARM64))
- RTE_LOG(WARNING, HASH_CRC,
- "Unsupported CRC32 algorithm requested using CRC32_ARM64\n");
+ RTE_LOG_LINE(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_ARM64");
if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_CRC32))
rte_hash_crc32_alg = CRC32_ARM64;
#endif
if (rte_hash_crc32_alg == CRC32_SW)
- RTE_LOG(WARNING, HASH_CRC,
- "Unsupported CRC32 algorithm requested using CRC32_SW\n");
+ RTE_LOG_LINE(WARNING, HASH_CRC,
+ "Unsupported CRC32 algorithm requested using CRC32_SW");
}
/* Setting the best available algorithm */
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d819dddd84..a5d84eee8e 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -243,8 +243,8 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
/* allocate tailq entry */
te = rte_zmalloc("THASH_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, HASH,
- "Can not allocate tailq entry for thash context %s\n",
+ RTE_LOG_LINE(ERR, HASH,
+ "Can not allocate tailq entry for thash context %s",
name);
rte_errno = ENOMEM;
goto exit;
@@ -252,7 +252,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
ctx = rte_zmalloc(NULL, sizeof(struct rte_thash_ctx) + key_len, 0);
if (ctx == NULL) {
- RTE_LOG(ERR, HASH, "thash ctx %s memory allocation failed\n",
+ RTE_LOG_LINE(ERR, HASH, "thash ctx %s memory allocation failed",
name);
rte_errno = ENOMEM;
goto free_te;
@@ -275,7 +275,7 @@ rte_thash_init_ctx(const char *name, uint32_t key_len, uint32_t reta_sz,
ctx->matrices = rte_zmalloc(NULL, key_len * sizeof(uint64_t),
RTE_CACHE_LINE_SIZE);
if (ctx->matrices == NULL) {
- RTE_LOG(ERR, HASH, "Cannot allocate matrices\n");
+ RTE_LOG_LINE(ERR, HASH, "Cannot allocate matrices");
rte_errno = ENOMEM;
goto free_ctx;
}
@@ -390,8 +390,8 @@ generate_subkey(struct rte_thash_ctx *ctx, struct thash_lfsr *lfsr,
if (((lfsr->bits_cnt + req_bits) > (1ULL << lfsr->deg) - 1) &&
((ctx->flags & RTE_THASH_IGNORE_PERIOD_OVERFLOW) !=
RTE_THASH_IGNORE_PERIOD_OVERFLOW)) {
- RTE_LOG(ERR, HASH,
- "Can't generate m-sequence due to period overflow\n");
+ RTE_LOG_LINE(ERR, HASH,
+ "Can't generate m-sequence due to period overflow");
return -ENOSPC;
}
@@ -470,9 +470,9 @@ insert_before(struct rte_thash_ctx *ctx,
return ret;
}
} else if ((next_ent != NULL) && (end > next_ent->offset)) {
- RTE_LOG(ERR, HASH,
+ RTE_LOG_LINE(ERR, HASH,
"Can't add helper %s due to conflict with existing"
- " helper %s\n", ent->name, next_ent->name);
+ " helper %s", ent->name, next_ent->name);
rte_free(ent);
return -ENOSPC;
}
@@ -519,9 +519,9 @@ insert_after(struct rte_thash_ctx *ctx,
int ret;
if ((next_ent != NULL) && (end > next_ent->offset)) {
- RTE_LOG(ERR, HASH,
+ RTE_LOG_LINE(ERR, HASH,
"Can't add helper %s due to conflict with existing"
- " helper %s\n", ent->name, next_ent->name);
+ " helper %s", ent->name, next_ent->name);
rte_free(ent);
return -EEXIST;
}
diff --git a/lib/hash/rte_thash_gfni.c b/lib/hash/rte_thash_gfni.c
index c863789b51..6b84180b62 100644
--- a/lib/hash/rte_thash_gfni.c
+++ b/lib/hash/rte_thash_gfni.c
@@ -20,8 +20,8 @@ rte_thash_gfni(const uint64_t *mtrx __rte_unused,
if (!warned) {
warned = true;
- RTE_LOG(ERR, HASH,
- "%s is undefined under given arch\n", __func__);
+ RTE_LOG_LINE(ERR, HASH,
+ "%s is undefined under given arch", __func__);
}
return 0;
@@ -38,8 +38,8 @@ rte_thash_gfni_bulk(const uint64_t *mtrx __rte_unused,
if (!warned) {
warned = true;
- RTE_LOG(ERR, HASH,
- "%s is undefined under given arch\n", __func__);
+ RTE_LOG_LINE(ERR, HASH,
+ "%s is undefined under given arch", __func__);
}
for (i = 0; i < num; i++)
diff --git a/lib/ip_frag/rte_ip_frag_common.c b/lib/ip_frag/rte_ip_frag_common.c
index eed399da6b..02dcac3137 100644
--- a/lib/ip_frag/rte_ip_frag_common.c
+++ b/lib/ip_frag/rte_ip_frag_common.c
@@ -54,20 +54,20 @@ rte_ip_frag_table_create(uint32_t bucket_num, uint32_t bucket_entries,
if (rte_is_power_of_2(bucket_entries) == 0 ||
nb_entries > UINT32_MAX || nb_entries == 0 ||
nb_entries < max_entries) {
- RTE_LOG(ERR, IPFRAG, "%s: invalid input parameter\n", __func__);
+ RTE_LOG_LINE(ERR, IPFRAG, "%s: invalid input parameter", __func__);
return NULL;
}
sz = sizeof (*tbl) + nb_entries * sizeof (tbl->pkt[0]);
if ((tbl = rte_zmalloc_socket(__func__, sz, RTE_CACHE_LINE_SIZE,
socket_id)) == NULL) {
- RTE_LOG(ERR, IPFRAG,
- "%s: allocation of %zu bytes at socket %d failed do\n",
+ RTE_LOG_LINE(ERR, IPFRAG,
+ "%s: allocation of %zu bytes at socket %d failed do",
__func__, sz, socket_id);
return NULL;
}
- RTE_LOG(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d\n",
+ RTE_LOG_LINE(INFO, IPFRAG, "%s: allocated of %zu bytes at socket %d",
__func__, sz, socket_id);
tbl->max_cycles = max_cycles;
diff --git a/lib/latencystats/rte_latencystats.c b/lib/latencystats/rte_latencystats.c
index f3c1746cca..cc3c2cf4de 100644
--- a/lib/latencystats/rte_latencystats.c
+++ b/lib/latencystats/rte_latencystats.c
@@ -25,7 +25,6 @@ latencystat_cycles_per_ns(void)
return rte_get_timer_hz() / NS_PER_SEC;
}
-/* Macros for printing using RTE_LOG */
RTE_LOG_REGISTER_DEFAULT(latencystat_logtype, INFO);
#define RTE_LOGTYPE_LATENCY_STATS latencystat_logtype
@@ -96,7 +95,7 @@ rte_latencystats_update(void)
latency_stats_index,
values, NUM_LATENCY_STATS);
if (ret < 0)
- RTE_LOG(INFO, LATENCY_STATS, "Failed to push the stats\n");
+ RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to push the stats");
return ret;
}
@@ -228,7 +227,7 @@ rte_latencystats_init(uint64_t app_samp_intvl,
mz = rte_memzone_reserve(MZ_RTE_LATENCY_STATS, sizeof(*glob_stats),
rte_socket_id(), flags);
if (mz == NULL) {
- RTE_LOG(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d\n",
+ RTE_LOG_LINE(ERR, LATENCY_STATS, "Cannot reserve memory: %s:%d",
__func__, __LINE__);
return -ENOMEM;
}
@@ -244,8 +243,8 @@ rte_latencystats_init(uint64_t app_samp_intvl,
latency_stats_index = rte_metrics_reg_names(ptr_strings,
NUM_LATENCY_STATS);
if (latency_stats_index < 0) {
- RTE_LOG(DEBUG, LATENCY_STATS,
- "Failed to register latency stats names\n");
+ RTE_LOG_LINE(DEBUG, LATENCY_STATS,
+ "Failed to register latency stats names");
return -1;
}
@@ -253,8 +252,8 @@ rte_latencystats_init(uint64_t app_samp_intvl,
ret = rte_mbuf_dyn_rx_timestamp_register(&timestamp_dynfield_offset,
&timestamp_dynflag);
if (ret != 0) {
- RTE_LOG(ERR, LATENCY_STATS,
- "Cannot register mbuf field/flag for timestamp\n");
+ RTE_LOG_LINE(ERR, LATENCY_STATS,
+ "Cannot register mbuf field/flag for timestamp");
return -rte_errno;
}
@@ -264,8 +263,8 @@ rte_latencystats_init(uint64_t app_samp_intvl,
ret = rte_eth_dev_info_get(pid, &dev_info);
if (ret != 0) {
- RTE_LOG(INFO, LATENCY_STATS,
- "Error during getting device (port %u) info: %s\n",
+ RTE_LOG_LINE(INFO, LATENCY_STATS,
+ "Error during getting device (port %u) info: %s",
pid, strerror(-ret));
continue;
@@ -276,18 +275,18 @@ rte_latencystats_init(uint64_t app_samp_intvl,
cbs->cb = rte_eth_add_first_rx_callback(pid, qid,
add_time_stamps, user_cb);
if (!cbs->cb)
- RTE_LOG(INFO, LATENCY_STATS, "Failed to "
+ RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to "
"register Rx callback for pid=%d, "
- "qid=%d\n", pid, qid);
+ "qid=%d", pid, qid);
}
for (qid = 0; qid < dev_info.nb_tx_queues; qid++) {
cbs = &tx_cbs[pid][qid];
cbs->cb = rte_eth_add_tx_callback(pid, qid,
calc_latency, user_cb);
if (!cbs->cb)
- RTE_LOG(INFO, LATENCY_STATS, "Failed to "
+ RTE_LOG_LINE(INFO, LATENCY_STATS, "Failed to "
"register Tx callback for pid=%d, "
- "qid=%d\n", pid, qid);
+ "qid=%d", pid, qid);
}
}
return 0;
@@ -308,8 +307,8 @@ rte_latencystats_uninit(void)
ret = rte_eth_dev_info_get(pid, &dev_info);
if (ret != 0) {
- RTE_LOG(INFO, LATENCY_STATS,
- "Error during getting device (port %u) info: %s\n",
+ RTE_LOG_LINE(INFO, LATENCY_STATS,
+ "Error during getting device (port %u) info: %s",
pid, strerror(-ret));
continue;
@@ -319,17 +318,17 @@ rte_latencystats_uninit(void)
cbs = &rx_cbs[pid][qid];
ret = rte_eth_remove_rx_callback(pid, qid, cbs->cb);
if (ret)
- RTE_LOG(INFO, LATENCY_STATS, "failed to "
+ RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to "
"remove Rx callback for pid=%d, "
- "qid=%d\n", pid, qid);
+ "qid=%d", pid, qid);
}
for (qid = 0; qid < dev_info.nb_tx_queues; qid++) {
cbs = &tx_cbs[pid][qid];
ret = rte_eth_remove_tx_callback(pid, qid, cbs->cb);
if (ret)
- RTE_LOG(INFO, LATENCY_STATS, "failed to "
+ RTE_LOG_LINE(INFO, LATENCY_STATS, "failed to "
"remove Tx callback for pid=%d, "
- "qid=%d\n", pid, qid);
+ "qid=%d", pid, qid);
}
}
@@ -366,8 +365,8 @@ rte_latencystats_get(struct rte_metric_value *values, uint16_t size)
const struct rte_memzone *mz;
mz = rte_memzone_lookup(MZ_RTE_LATENCY_STATS);
if (mz == NULL) {
- RTE_LOG(ERR, LATENCY_STATS,
- "Latency stats memzone not found\n");
+ RTE_LOG_LINE(ERR, LATENCY_STATS,
+ "Latency stats memzone not found");
return -ENOMEM;
}
glob_stats = mz->addr;
diff --git a/lib/log/log.c b/lib/log/log.c
index e3cd4cff0f..d03691db0d 100644
--- a/lib/log/log.c
+++ b/lib/log/log.c
@@ -146,7 +146,7 @@ logtype_set_level(uint32_t type, uint32_t level)
if (current != level) {
rte_logs.dynamic_types[type].loglevel = level;
- RTE_LOG(DEBUG, EAL, "%s log level changed from %s to %s\n",
+ RTE_LOG_LINE(DEBUG, EAL, "%s log level changed from %s to %s",
rte_logs.dynamic_types[type].name == NULL ?
"" : rte_logs.dynamic_types[type].name,
eal_log_level2str(current),
@@ -519,8 +519,8 @@ eal_log_set_default(FILE *default_log)
default_log_stream = default_log;
#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
- RTE_LOG(NOTICE, EAL,
- "Debug dataplane logs available - lower performance\n");
+ RTE_LOG_LINE(NOTICE, EAL,
+ "Debug dataplane logs available - lower performance");
#endif
}
diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c
index 0ca8214786..a332faf720 100644
--- a/lib/lpm/rte_lpm.c
+++ b/lib/lpm/rte_lpm.c
@@ -192,7 +192,7 @@ rte_lpm_create(const char *name, int socket_id,
/* allocate tailq entry */
te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
+ RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry");
rte_errno = ENOMEM;
goto exit;
}
@@ -201,7 +201,7 @@ rte_lpm_create(const char *name, int socket_id,
i_lpm = rte_zmalloc_socket(mem_name, mem_size,
RTE_CACHE_LINE_SIZE, socket_id);
if (i_lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
+ RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed");
rte_free(te);
rte_errno = ENOMEM;
goto exit;
@@ -211,7 +211,7 @@ rte_lpm_create(const char *name, int socket_id,
(size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);
if (i_lpm->rules_tbl == NULL) {
- RTE_LOG(ERR, LPM, "LPM rules_tbl memory allocation failed\n");
+ RTE_LOG_LINE(ERR, LPM, "LPM rules_tbl memory allocation failed");
rte_free(i_lpm);
i_lpm = NULL;
rte_free(te);
@@ -223,7 +223,7 @@ rte_lpm_create(const char *name, int socket_id,
(size_t)tbl8s_size, RTE_CACHE_LINE_SIZE, socket_id);
if (i_lpm->lpm.tbl8 == NULL) {
- RTE_LOG(ERR, LPM, "LPM tbl8 memory allocation failed\n");
+ RTE_LOG_LINE(ERR, LPM, "LPM tbl8 memory allocation failed");
rte_free(i_lpm->rules_tbl);
rte_free(i_lpm);
i_lpm = NULL;
@@ -338,7 +338,7 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg)
params.v = cfg->v;
i_lpm->dq = rte_rcu_qsbr_dq_create(&params);
if (i_lpm->dq == NULL) {
- RTE_LOG(ERR, LPM, "LPM defer queue creation failed\n");
+ RTE_LOG_LINE(ERR, LPM, "LPM defer queue creation failed");
return 1;
}
} else {
@@ -565,7 +565,7 @@ tbl8_free(struct __rte_lpm *i_lpm, uint32_t tbl8_group_start)
status = rte_rcu_qsbr_dq_enqueue(i_lpm->dq,
(void *)&tbl8_group_start);
if (status == 1) {
- RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
+ RTE_LOG_LINE(ERR, LPM, "Failed to push QSBR FIFO");
return -rte_errno;
}
}
diff --git a/lib/lpm/rte_lpm6.c b/lib/lpm/rte_lpm6.c
index 24ce7dd022..251bfcc73d 100644
--- a/lib/lpm/rte_lpm6.c
+++ b/lib/lpm/rte_lpm6.c
@@ -280,7 +280,7 @@ rte_lpm6_create(const char *name, int socket_id,
rules_tbl = rte_hash_create(&rule_hash_tbl_params);
if (rules_tbl == NULL) {
- RTE_LOG(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)\n",
+ RTE_LOG_LINE(ERR, LPM, "LPM rules hash table allocation failed: %s (%d)",
rte_strerror(rte_errno), rte_errno);
goto fail_wo_unlock;
}
@@ -290,7 +290,7 @@ rte_lpm6_create(const char *name, int socket_id,
sizeof(uint32_t) * config->number_tbl8s,
RTE_CACHE_LINE_SIZE);
if (tbl8_pool == NULL) {
- RTE_LOG(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)\n",
+ RTE_LOG_LINE(ERR, LPM, "LPM tbl8 pool allocation failed: %s (%d)",
rte_strerror(rte_errno), rte_errno);
rte_errno = ENOMEM;
goto fail_wo_unlock;
@@ -301,7 +301,7 @@ rte_lpm6_create(const char *name, int socket_id,
sizeof(struct rte_lpm_tbl8_hdr) * config->number_tbl8s,
RTE_CACHE_LINE_SIZE);
if (tbl8_hdrs == NULL) {
- RTE_LOG(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)\n",
+ RTE_LOG_LINE(ERR, LPM, "LPM tbl8 headers allocation failed: %s (%d)",
rte_strerror(rte_errno), rte_errno);
rte_errno = ENOMEM;
goto fail_wo_unlock;
@@ -330,7 +330,7 @@ rte_lpm6_create(const char *name, int socket_id,
/* allocate tailq entry */
te = rte_zmalloc("LPM6_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry!\n");
+ RTE_LOG_LINE(ERR, LPM, "Failed to allocate tailq entry!");
rte_errno = ENOMEM;
goto fail;
}
@@ -340,7 +340,7 @@ rte_lpm6_create(const char *name, int socket_id,
RTE_CACHE_LINE_SIZE, socket_id);
if (lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
+ RTE_LOG_LINE(ERR, LPM, "LPM memory allocation failed");
rte_free(te);
rte_errno = ENOMEM;
goto fail;
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 3eccc61827..8472c6a977 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -231,7 +231,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
int ret;
if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
- RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+ RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned",
priv_size);
rte_errno = EINVAL;
return NULL;
@@ -251,7 +251,7 @@ rte_pktmbuf_pool_create_by_ops(const char *name, unsigned int n,
mp_ops_name = rte_mbuf_best_mempool_ops();
ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
if (ret != 0) {
- RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+ RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler");
rte_mempool_free(mp);
rte_errno = -ret;
return NULL;
@@ -297,7 +297,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
int ret;
if (RTE_ALIGN(priv_size, RTE_MBUF_PRIV_ALIGN) != priv_size) {
- RTE_LOG(ERR, MBUF, "mbuf priv_size=%u is not aligned\n",
+ RTE_LOG_LINE(ERR, MBUF, "mbuf priv_size=%u is not aligned",
priv_size);
rte_errno = EINVAL;
return NULL;
@@ -307,12 +307,12 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
const struct rte_pktmbuf_extmem *extm = ext_mem + i;
if (!extm->elt_size || !extm->buf_len || !extm->buf_ptr) {
- RTE_LOG(ERR, MBUF, "invalid extmem descriptor\n");
+ RTE_LOG_LINE(ERR, MBUF, "invalid extmem descriptor");
rte_errno = EINVAL;
return NULL;
}
if (data_room_size > extm->elt_size) {
- RTE_LOG(ERR, MBUF, "ext elt_size=%u is too small\n",
+ RTE_LOG_LINE(ERR, MBUF, "ext elt_size=%u is too small",
priv_size);
rte_errno = EINVAL;
return NULL;
@@ -321,7 +321,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
}
/* Check whether enough external memory provided. */
if (n_elts < n) {
- RTE_LOG(ERR, MBUF, "not enough extmem\n");
+ RTE_LOG_LINE(ERR, MBUF, "not enough extmem");
rte_errno = ENOMEM;
return NULL;
}
@@ -342,7 +342,7 @@ rte_pktmbuf_pool_create_extbuf(const char *name, unsigned int n,
mp_ops_name = rte_mbuf_best_mempool_ops();
ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
if (ret != 0) {
- RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
+ RTE_LOG_LINE(ERR, MBUF, "error setting mempool handler");
rte_mempool_free(mp);
rte_errno = -ret;
return NULL;
diff --git a/lib/mbuf/rte_mbuf_dyn.c b/lib/mbuf/rte_mbuf_dyn.c
index 4fb1863a10..a9f7bb2b81 100644
--- a/lib/mbuf/rte_mbuf_dyn.c
+++ b/lib/mbuf/rte_mbuf_dyn.c
@@ -118,7 +118,7 @@ init_shared_mem(void)
mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
}
if (mz == NULL) {
- RTE_LOG(ERR, MBUF, "Failed to get mbuf dyn shared memory\n");
+ RTE_LOG_LINE(ERR, MBUF, "Failed to get mbuf dyn shared memory");
return -1;
}
@@ -317,7 +317,7 @@ __rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
shm->free_space[i] = 0;
process_score();
- RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
+ RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd",
params->name, params->size, params->align, params->flags,
offset);
@@ -491,7 +491,7 @@ __rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
shm->free_flags &= ~(1ULL << bitnum);
- RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
+ RTE_LOG_LINE(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u",
params->name, params->flags, bitnum);
return bitnum;
@@ -592,8 +592,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag,
offset = rte_mbuf_dynfield_register(&field_desc);
if (offset < 0) {
- RTE_LOG(ERR, MBUF,
- "Failed to register mbuf field for timestamp\n");
+ RTE_LOG_LINE(ERR, MBUF,
+ "Failed to register mbuf field for timestamp");
return -1;
}
if (field_offset != NULL)
@@ -602,8 +602,8 @@ rte_mbuf_dyn_timestamp_register(int *field_offset, uint64_t *flag,
strlcpy(flag_desc.name, flag_name, sizeof(flag_desc.name));
offset = rte_mbuf_dynflag_register(&flag_desc);
if (offset < 0) {
- RTE_LOG(ERR, MBUF,
- "Failed to register mbuf flag for %s timestamp\n",
+ RTE_LOG_LINE(ERR, MBUF,
+ "Failed to register mbuf flag for %s timestamp",
direction);
return -1;
}
diff --git a/lib/mbuf/rte_mbuf_pool_ops.c b/lib/mbuf/rte_mbuf_pool_ops.c
index 5318430126..639aa557f8 100644
--- a/lib/mbuf/rte_mbuf_pool_ops.c
+++ b/lib/mbuf/rte_mbuf_pool_ops.c
@@ -33,8 +33,8 @@ rte_mbuf_set_platform_mempool_ops(const char *ops_name)
return 0;
}
- RTE_LOG(ERR, MBUF,
- "%s is already registered as platform mbuf pool ops\n",
+ RTE_LOG_LINE(ERR, MBUF,
+ "%s is already registered as platform mbuf pool ops",
(char *)mz->addr);
return -EEXIST;
}
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 2f8adad5ca..b66c8898a8 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -775,7 +775,7 @@ rte_mempool_cache_create(uint32_t size, int socket_id)
cache = rte_zmalloc_socket("MEMPOOL_CACHE", sizeof(*cache),
RTE_CACHE_LINE_SIZE, socket_id);
if (cache == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate mempool cache.\n");
+ RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate mempool cache.");
rte_errno = ENOMEM;
return NULL;
}
@@ -877,7 +877,7 @@ rte_mempool_create_empty(const char *name, unsigned n, unsigned elt_size,
/* try to allocate tailq entry */
te = rte_zmalloc("MEMPOOL_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate tailq entry!\n");
+ RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate tailq entry!");
goto exit_unlock;
}
@@ -1088,16 +1088,16 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
if (free == 0) {
if (cookie != RTE_MEMPOOL_HEADER_COOKIE1) {
- RTE_LOG(CRIT, MEMPOOL,
- "obj=%p, mempool=%p, cookie=%" PRIx64 "\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL,
+ "obj=%p, mempool=%p, cookie=%" PRIx64,
obj, (const void *) mp, cookie);
rte_panic("MEMPOOL: bad header cookie (put)\n");
}
hdr->cookie = RTE_MEMPOOL_HEADER_COOKIE2;
} else if (free == 1) {
if (cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
- RTE_LOG(CRIT, MEMPOOL,
- "obj=%p, mempool=%p, cookie=%" PRIx64 "\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL,
+ "obj=%p, mempool=%p, cookie=%" PRIx64,
obj, (const void *) mp, cookie);
rte_panic("MEMPOOL: bad header cookie (get)\n");
}
@@ -1105,8 +1105,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
} else if (free == 2) {
if (cookie != RTE_MEMPOOL_HEADER_COOKIE1 &&
cookie != RTE_MEMPOOL_HEADER_COOKIE2) {
- RTE_LOG(CRIT, MEMPOOL,
- "obj=%p, mempool=%p, cookie=%" PRIx64 "\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL,
+ "obj=%p, mempool=%p, cookie=%" PRIx64,
obj, (const void *) mp, cookie);
rte_panic("MEMPOOL: bad header cookie (audit)\n");
}
@@ -1114,8 +1114,8 @@ void rte_mempool_check_cookies(const struct rte_mempool *mp,
tlr = rte_mempool_get_trailer(obj);
cookie = tlr->cookie;
if (cookie != RTE_MEMPOOL_TRAILER_COOKIE) {
- RTE_LOG(CRIT, MEMPOOL,
- "obj=%p, mempool=%p, cookie=%" PRIx64 "\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL,
+ "obj=%p, mempool=%p, cookie=%" PRIx64,
obj, (const void *) mp, cookie);
rte_panic("MEMPOOL: bad trailer cookie\n");
}
@@ -1200,7 +1200,7 @@ mempool_audit_cache(const struct rte_mempool *mp)
const struct rte_mempool_cache *cache;
cache = &mp->local_cache[lcore_id];
if (cache->len > RTE_DIM(cache->objs)) {
- RTE_LOG(CRIT, MEMPOOL, "badness on cache[%u]\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL, "badness on cache[%u]",
lcore_id);
rte_panic("MEMPOOL: invalid cache len\n");
}
@@ -1429,7 +1429,7 @@ rte_mempool_event_callback_register(rte_mempool_event_callback *func,
cb = calloc(1, sizeof(*cb));
if (cb == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate event callback!\n");
+ RTE_LOG_LINE(ERR, MEMPOOL, "Cannot allocate event callback!");
ret = -ENOMEM;
goto exit;
}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4f8511b8f5..30ce579737 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -847,7 +847,7 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
ret = ops->enqueue(mp, obj_table, n);
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (unlikely(ret < 0))
- RTE_LOG(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s\n",
+ RTE_LOG_LINE(CRIT, MEMPOOL, "cannot enqueue %u objects to mempool %s",
n, mp->name);
#endif
return ret;
diff --git a/lib/mempool/rte_mempool_ops.c b/lib/mempool/rte_mempool_ops.c
index e871de9ec9..d35e9b118b 100644
--- a/lib/mempool/rte_mempool_ops.c
+++ b/lib/mempool/rte_mempool_ops.c
@@ -31,22 +31,22 @@ rte_mempool_register_ops(const struct rte_mempool_ops *h)
if (rte_mempool_ops_table.num_ops >=
RTE_MEMPOOL_MAX_OPS_IDX) {
rte_spinlock_unlock(&rte_mempool_ops_table.sl);
- RTE_LOG(ERR, MEMPOOL,
- "Maximum number of mempool ops structs exceeded\n");
+ RTE_LOG_LINE(ERR, MEMPOOL,
+ "Maximum number of mempool ops structs exceeded");
return -ENOSPC;
}
if (h->alloc == NULL || h->enqueue == NULL ||
h->dequeue == NULL || h->get_count == NULL) {
rte_spinlock_unlock(&rte_mempool_ops_table.sl);
- RTE_LOG(ERR, MEMPOOL,
- "Missing callback while registering mempool ops\n");
+ RTE_LOG_LINE(ERR, MEMPOOL,
+ "Missing callback while registering mempool ops");
return -EINVAL;
}
if (strlen(h->name) >= sizeof(ops->name) - 1) {
rte_spinlock_unlock(&rte_mempool_ops_table.sl);
- RTE_LOG(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long\n",
+ RTE_LOG_LINE(DEBUG, MEMPOOL, "%s(): mempool_ops <%s>: name too long",
__func__, h->name);
rte_errno = EEXIST;
return -EEXIST;
diff --git a/lib/pipeline/rte_pipeline.c b/lib/pipeline/rte_pipeline.c
index 436cf54953..fe91c48947 100644
--- a/lib/pipeline/rte_pipeline.c
+++ b/lib/pipeline/rte_pipeline.c
@@ -160,22 +160,22 @@ static int
rte_pipeline_check_params(struct rte_pipeline_params *params)
{
if (params == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Incorrect value for parameter params\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Incorrect value for parameter params", __func__);
return -EINVAL;
}
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Incorrect value for parameter name\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Incorrect value for parameter name", __func__);
return -EINVAL;
}
/* socket */
if (params->socket_id < 0) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Incorrect value for parameter socket_id\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Incorrect value for parameter socket_id",
__func__);
return -EINVAL;
}
@@ -192,8 +192,8 @@ rte_pipeline_create(struct rte_pipeline_params *params)
/* Check input parameters */
status = rte_pipeline_check_params(params);
if (status != 0) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Pipeline params check failed (%d)\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Pipeline params check failed (%d)",
__func__, status);
return NULL;
}
@@ -203,8 +203,8 @@ rte_pipeline_create(struct rte_pipeline_params *params)
RTE_CACHE_LINE_SIZE, params->socket_id);
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Pipeline memory allocation failed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Pipeline memory allocation failed", __func__);
return NULL;
}
@@ -232,8 +232,8 @@ rte_pipeline_free(struct rte_pipeline *p)
/* Check input parameters */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: rte_pipeline parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: rte_pipeline parameter is NULL", __func__);
return -EINVAL;
}
@@ -273,44 +273,44 @@ rte_table_check_params(struct rte_pipeline *p,
uint32_t *table_id)
{
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL",
__func__);
return -EINVAL;
}
if (params == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter is NULL",
__func__);
return -EINVAL;
}
if (table_id == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: table_id parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: table_id parameter is NULL",
__func__);
return -EINVAL;
}
/* ops */
if (params->ops == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params->ops is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops is NULL",
__func__);
return -EINVAL;
}
if (params->ops->f_create == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_create function pointer is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_create function pointer is NULL", __func__);
return -EINVAL;
}
if (params->ops->f_lookup == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_lookup function pointer is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_lookup function pointer is NULL", __func__);
return -EINVAL;
}
/* De we have room for one more table? */
if (p->num_tables == RTE_PIPELINE_TABLE_MAX) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Incorrect value for num_tables parameter\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Incorrect value for num_tables parameter",
__func__);
return -EINVAL;
}
@@ -343,8 +343,8 @@ rte_pipeline_table_create(struct rte_pipeline *p,
default_entry = rte_zmalloc_socket(
"PIPELINE", entry_size, RTE_CACHE_LINE_SIZE, p->socket_id);
if (default_entry == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Failed to allocate default entry\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Failed to allocate default entry", __func__);
return -EINVAL;
}
@@ -353,7 +353,7 @@ rte_pipeline_table_create(struct rte_pipeline *p,
entry_size);
if (h_table == NULL) {
rte_free(default_entry);
- RTE_LOG(ERR, PIPELINE, "%s: Table creation failed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: Table creation failed", __func__);
return -EINVAL;
}
@@ -399,20 +399,20 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL",
__func__);
return -EINVAL;
}
if (default_entry == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: default_entry parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: default_entry parameter is NULL", __func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
@@ -421,8 +421,8 @@ rte_pipeline_table_default_entry_add(struct rte_pipeline *p,
if ((default_entry->action == RTE_PIPELINE_ACTION_TABLE) &&
table->table_next_id_valid &&
(default_entry->table_id != table->table_next_id)) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Tree-like topologies not allowed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Tree-like topologies not allowed", __func__);
return -EINVAL;
}
@@ -448,14 +448,14 @@ rte_pipeline_table_default_entry_delete(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: pipeline parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: pipeline parameter is NULL", __func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
@@ -484,32 +484,32 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL",
__func__);
return -EINVAL;
}
if (key == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL", __func__);
return -EINVAL;
}
if (entry == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: entry parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: entry parameter is NULL",
__func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
table = &p->tables[table_id];
if (table->ops.f_add == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: f_add function pointer NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add function pointer NULL",
__func__);
return -EINVAL;
}
@@ -517,8 +517,8 @@ rte_pipeline_table_entry_add(struct rte_pipeline *p,
if ((entry->action == RTE_PIPELINE_ACTION_TABLE) &&
table->table_next_id_valid &&
(entry->table_id != table->table_next_id)) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Tree-like topologies not allowed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Tree-like topologies not allowed", __func__);
return -EINVAL;
}
@@ -544,28 +544,28 @@ rte_pipeline_table_entry_delete(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (key == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL",
__func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
table = &p->tables[table_id];
if (table->ops.f_delete == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_delete function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_delete function pointer NULL", __func__);
return -EINVAL;
}
@@ -585,32 +585,32 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter is NULL",
__func__);
return -EINVAL;
}
if (keys == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: keys parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: keys parameter is NULL", __func__);
return -EINVAL;
}
if (entries == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: entries parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: entries parameter is NULL",
__func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
table = &p->tables[table_id];
if (table->ops.f_add_bulk == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: f_add_bulk function pointer NULL",
__func__);
return -EINVAL;
}
@@ -619,8 +619,8 @@ int rte_pipeline_table_entry_add_bulk(struct rte_pipeline *p,
if ((entries[i]->action == RTE_PIPELINE_ACTION_TABLE) &&
table->table_next_id_valid &&
(entries[i]->table_id != table->table_next_id)) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Tree-like topologies not allowed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Tree-like topologies not allowed", __func__);
return -EINVAL;
}
}
@@ -649,28 +649,28 @@ int rte_pipeline_table_entry_delete_bulk(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (keys == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: key parameter is NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: key parameter is NULL",
__func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table_id %d out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table_id %d out of range", __func__, table_id);
return -EINVAL;
}
table = &p->tables[table_id];
if (table->ops.f_delete_bulk == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_delete function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_delete function pointer NULL", __func__);
return -EINVAL;
}
@@ -687,35 +687,35 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p,
uint32_t *port_id)
{
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (params == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__);
return -EINVAL;
}
if (port_id == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL",
__func__);
return -EINVAL;
}
/* ops */
if (params->ops == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL",
__func__);
return -EINVAL;
}
if (params->ops->f_create == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_create function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_create function pointer NULL", __func__);
return -EINVAL;
}
if (params->ops->f_rx == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: f_rx function pointer NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: f_rx function pointer NULL",
__func__);
return -EINVAL;
}
@@ -723,15 +723,15 @@ rte_pipeline_port_in_check_params(struct rte_pipeline *p,
/* burst_size */
if ((params->burst_size == 0) ||
(params->burst_size > RTE_PORT_IN_BURST_SIZE_MAX)) {
- RTE_LOG(ERR, PIPELINE, "%s: invalid value for burst_size\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: invalid value for burst_size",
__func__);
return -EINVAL;
}
/* Do we have room for one more port? */
if (p->num_ports_in == RTE_PIPELINE_PORT_IN_MAX) {
- RTE_LOG(ERR, PIPELINE,
- "%s: invalid value for num_ports_in\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: invalid value for num_ports_in", __func__);
return -EINVAL;
}
@@ -744,51 +744,51 @@ rte_pipeline_port_out_check_params(struct rte_pipeline *p,
uint32_t *port_id)
{
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (params == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params parameter NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params parameter NULL", __func__);
return -EINVAL;
}
if (port_id == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: port_id parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: port_id parameter NULL",
__func__);
return -EINVAL;
}
/* ops */
if (params->ops == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: params->ops parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: params->ops parameter NULL",
__func__);
return -EINVAL;
}
if (params->ops->f_create == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_create function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_create function pointer NULL", __func__);
return -EINVAL;
}
if (params->ops->f_tx == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_tx function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_tx function pointer NULL", __func__);
return -EINVAL;
}
if (params->ops->f_tx_bulk == NULL) {
- RTE_LOG(ERR, PIPELINE,
- "%s: f_tx_bulk function pointer NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: f_tx_bulk function pointer NULL", __func__);
return -EINVAL;
}
/* Do we have room for one more port? */
if (p->num_ports_out == RTE_PIPELINE_PORT_OUT_MAX) {
- RTE_LOG(ERR, PIPELINE,
- "%s: invalid value for num_ports_out\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: invalid value for num_ports_out", __func__);
return -EINVAL;
}
@@ -816,7 +816,7 @@ rte_pipeline_port_in_create(struct rte_pipeline *p,
/* Create the port */
h_port = params->ops->f_create(params->arg_create, p->socket_id);
if (h_port == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__);
return -EINVAL;
}
@@ -866,7 +866,7 @@ rte_pipeline_port_out_create(struct rte_pipeline *p,
/* Create the port */
h_port = params->ops->f_create(params->arg_create, p->socket_id);
if (h_port == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: Port creation failed\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: Port creation failed", __func__);
return -EINVAL;
}
@@ -901,21 +901,21 @@ rte_pipeline_port_in_connect_to_table(struct rte_pipeline *p,
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (port_id >= p->num_ports_in) {
- RTE_LOG(ERR, PIPELINE,
- "%s: port IN ID %u is out of range\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: port IN ID %u is out of range",
__func__, port_id);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Table ID %u is out of range\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Table ID %u is out of range",
__func__, table_id);
return -EINVAL;
}
@@ -935,14 +935,14 @@ rte_pipeline_port_in_enable(struct rte_pipeline *p, uint32_t port_id)
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (port_id >= p->num_ports_in) {
- RTE_LOG(ERR, PIPELINE,
- "%s: port IN ID %u is out of range\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: port IN ID %u is out of range",
__func__, port_id);
return -EINVAL;
}
@@ -982,13 +982,13 @@ rte_pipeline_port_in_disable(struct rte_pipeline *p, uint32_t port_id)
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (port_id >= p->num_ports_in) {
- RTE_LOG(ERR, PIPELINE, "%s: port IN ID %u is out of range\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: port IN ID %u is out of range",
__func__, port_id);
return -EINVAL;
}
@@ -1035,7 +1035,7 @@ rte_pipeline_check(struct rte_pipeline *p)
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
@@ -1043,17 +1043,17 @@ rte_pipeline_check(struct rte_pipeline *p)
/* Check that pipeline has at least one input port, one table and one
output port */
if (p->num_ports_in == 0) {
- RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 input port\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 input port",
__func__);
return -EINVAL;
}
if (p->num_tables == 0) {
- RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 table\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 table",
__func__);
return -EINVAL;
}
if (p->num_ports_out == 0) {
- RTE_LOG(ERR, PIPELINE, "%s: must have at least 1 output port\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: must have at least 1 output port",
__func__);
return -EINVAL;
}
@@ -1063,8 +1063,8 @@ rte_pipeline_check(struct rte_pipeline *p)
struct rte_port_in *port_in = &p->ports_in[port_in_id];
if (port_in->table_id == RTE_TABLE_INVALID) {
- RTE_LOG(ERR, PIPELINE,
- "%s: Port IN ID %u is not connected\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: Port IN ID %u is not connected",
__func__, port_in_id);
return -EINVAL;
}
@@ -1447,7 +1447,7 @@ rte_pipeline_flush(struct rte_pipeline *p)
/* Check input arguments */
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
@@ -1500,14 +1500,14 @@ int rte_pipeline_port_in_stats_read(struct rte_pipeline *p, uint32_t port_id,
int retval;
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (port_id >= p->num_ports_in) {
- RTE_LOG(ERR, PIPELINE,
- "%s: port IN ID %u is out of range\n",
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: port IN ID %u is out of range",
__func__, port_id);
return -EINVAL;
}
@@ -1537,13 +1537,13 @@ int rte_pipeline_port_out_stats_read(struct rte_pipeline *p, uint32_t port_id,
int retval;
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL", __func__);
return -EINVAL;
}
if (port_id >= p->num_ports_out) {
- RTE_LOG(ERR, PIPELINE,
- "%s: port OUT ID %u is out of range\n", __func__, port_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: port OUT ID %u is out of range", __func__, port_id);
return -EINVAL;
}
@@ -1571,14 +1571,14 @@ int rte_pipeline_table_stats_read(struct rte_pipeline *p, uint32_t table_id,
int retval;
if (p == NULL) {
- RTE_LOG(ERR, PIPELINE, "%s: pipeline parameter NULL\n",
+ RTE_LOG_LINE(ERR, PIPELINE, "%s: pipeline parameter NULL",
__func__);
return -EINVAL;
}
if (table_id >= p->num_tables) {
- RTE_LOG(ERR, PIPELINE,
- "%s: table %u is out of range\n", __func__, table_id);
+ RTE_LOG_LINE(ERR, PIPELINE,
+ "%s: table %u is out of range", __func__, table_id);
return -EINVAL;
}
diff --git a/lib/port/rte_port_ethdev.c b/lib/port/rte_port_ethdev.c
index e6bb7ee480..7f7eadda11 100644
--- a/lib/port/rte_port_ethdev.c
+++ b/lib/port/rte_port_ethdev.c
@@ -43,7 +43,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__);
return NULL;
}
@@ -51,7 +51,7 @@ rte_port_ethdev_reader_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -78,7 +78,7 @@ static int
rte_port_ethdev_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -142,7 +142,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -150,7 +150,7 @@ rte_port_ethdev_writer_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -257,7 +257,7 @@ static int
rte_port_ethdev_writer_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
@@ -323,7 +323,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -331,7 +331,7 @@ rte_port_ethdev_writer_nodrop_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -470,7 +470,7 @@ static int
rte_port_ethdev_writer_nodrop_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/port/rte_port_eventdev.c b/lib/port/rte_port_eventdev.c
index 13350fd608..1d0571966c 100644
--- a/lib/port/rte_port_eventdev.c
+++ b/lib/port/rte_port_eventdev.c
@@ -45,7 +45,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__);
return NULL;
}
@@ -53,7 +53,7 @@ rte_port_eventdev_reader_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -85,7 +85,7 @@ static int
rte_port_eventdev_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -155,7 +155,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id)
(conf->enq_burst_sz == 0) ||
(conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->enq_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -163,7 +163,7 @@ rte_port_eventdev_writer_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -290,7 +290,7 @@ static int
rte_port_eventdev_writer_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
@@ -362,7 +362,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id)
(conf->enq_burst_sz == 0) ||
(conf->enq_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->enq_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -370,7 +370,7 @@ rte_port_eventdev_writer_nodrop_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -530,7 +530,7 @@ static int
rte_port_eventdev_writer_nodrop_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/port/rte_port_fd.c b/lib/port/rte_port_fd.c
index 7e140793b2..1b95d7b014 100644
--- a/lib/port/rte_port_fd.c
+++ b/lib/port/rte_port_fd.c
@@ -43,19 +43,19 @@ rte_port_fd_reader_create(void *params, int socket_id)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__);
return NULL;
}
if (conf->fd < 0) {
- RTE_LOG(ERR, PORT, "%s: Invalid file descriptor\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid file descriptor", __func__);
return NULL;
}
if (conf->mtu == 0) {
- RTE_LOG(ERR, PORT, "%s: Invalid MTU\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid MTU", __func__);
return NULL;
}
if (conf->mempool == NULL) {
- RTE_LOG(ERR, PORT, "%s: Invalid mempool\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid mempool", __func__);
return NULL;
}
@@ -63,7 +63,7 @@ rte_port_fd_reader_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -109,7 +109,7 @@ static int
rte_port_fd_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -171,7 +171,7 @@ rte_port_fd_writer_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -179,7 +179,7 @@ rte_port_fd_writer_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -279,7 +279,7 @@ static int
rte_port_fd_writer_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
@@ -344,7 +344,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -352,7 +352,7 @@ rte_port_fd_writer_nodrop_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -464,7 +464,7 @@ static int
rte_port_fd_writer_nodrop_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/port/rte_port_frag.c b/lib/port/rte_port_frag.c
index e1f1892176..39ff31e447 100644
--- a/lib/port/rte_port_frag.c
+++ b/lib/port/rte_port_frag.c
@@ -62,24 +62,24 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__);
return NULL;
}
if (conf->ring == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__);
return NULL;
}
if (conf->mtu == 0) {
- RTE_LOG(ERR, PORT, "%s: Parameter mtu is invalid\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter mtu is invalid", __func__);
return NULL;
}
if (conf->pool_direct == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter pool_direct is NULL\n",
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_direct is NULL",
__func__);
return NULL;
}
if (conf->pool_indirect == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter pool_indirect is NULL\n",
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter pool_indirect is NULL",
__func__);
return NULL;
}
@@ -88,7 +88,7 @@ rte_port_ring_reader_frag_create(void *params, int socket_id, int is_ipv4)
port = rte_zmalloc_socket("PORT", sizeof(*port), RTE_CACHE_LINE_SIZE,
socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return NULL;
}
@@ -232,7 +232,7 @@ static int
rte_port_ring_reader_frag_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__);
return -1;
}
diff --git a/lib/port/rte_port_ras.c b/lib/port/rte_port_ras.c
index 15109661d1..1e697fd226 100644
--- a/lib/port/rte_port_ras.c
+++ b/lib/port/rte_port_ras.c
@@ -69,16 +69,16 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter conf is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter conf is NULL", __func__);
return NULL;
}
if (conf->ring == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter ring is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter ring is NULL", __func__);
return NULL;
}
if ((conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) {
- RTE_LOG(ERR, PORT, "%s: Parameter tx_burst_sz is invalid\n",
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter tx_burst_sz is invalid",
__func__);
return NULL;
}
@@ -87,7 +87,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate socket\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate socket", __func__);
return NULL;
}
@@ -103,7 +103,7 @@ rte_port_ring_writer_ras_create(void *params, int socket_id, int is_ipv4)
socket_id);
if (port->frag_tbl == NULL) {
- RTE_LOG(ERR, PORT, "%s: rte_ip_frag_table_create failed\n",
+ RTE_LOG_LINE(ERR, PORT, "%s: rte_ip_frag_table_create failed",
__func__);
rte_free(port);
return NULL;
@@ -282,7 +282,7 @@ rte_port_ring_writer_ras_free(void *port)
port;
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Parameter port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Parameter port is NULL", __func__);
return -1;
}
diff --git a/lib/port/rte_port_ring.c b/lib/port/rte_port_ring.c
index 002efb7c3e..42b33763d1 100644
--- a/lib/port/rte_port_ring.c
+++ b/lib/port/rte_port_ring.c
@@ -46,7 +46,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id,
(conf->ring == NULL) ||
(rte_ring_is_cons_single(conf->ring) && is_multi) ||
(!rte_ring_is_cons_single(conf->ring) && !is_multi)) {
- RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__);
return NULL;
}
@@ -54,7 +54,7 @@ rte_port_ring_reader_create_internal(void *params, int socket_id,
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -107,7 +107,7 @@ static int
rte_port_ring_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -174,7 +174,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id,
(rte_ring_is_prod_single(conf->ring) && is_multi) ||
(!rte_ring_is_prod_single(conf->ring) && !is_multi) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) {
- RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__);
return NULL;
}
@@ -182,7 +182,7 @@ rte_port_ring_writer_create_internal(void *params, int socket_id,
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -370,7 +370,7 @@ rte_port_ring_writer_free(void *port)
struct rte_port_ring_writer *p = port;
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
@@ -443,7 +443,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id,
(rte_ring_is_prod_single(conf->ring) && is_multi) ||
(!rte_ring_is_prod_single(conf->ring) && !is_multi) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX)) {
- RTE_LOG(ERR, PORT, "%s: Invalid Parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid Parameters", __func__);
return NULL;
}
@@ -451,7 +451,7 @@ rte_port_ring_writer_nodrop_create_internal(void *params, int socket_id,
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -703,7 +703,7 @@ rte_port_ring_writer_nodrop_free(void *port)
port;
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/port/rte_port_sched.c b/lib/port/rte_port_sched.c
index f6255c4346..e83112989f 100644
--- a/lib/port/rte_port_sched.c
+++ b/lib/port/rte_port_sched.c
@@ -40,7 +40,7 @@ rte_port_sched_reader_create(void *params, int socket_id)
/* Check input parameters */
if ((conf == NULL) ||
(conf->sched == NULL)) {
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__);
return NULL;
}
@@ -48,7 +48,7 @@ rte_port_sched_reader_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -74,7 +74,7 @@ static int
rte_port_sched_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -139,7 +139,7 @@ rte_port_sched_writer_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__);
return NULL;
}
@@ -147,7 +147,7 @@ rte_port_sched_writer_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -247,7 +247,7 @@ static int
rte_port_sched_writer_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/port/rte_port_source_sink.c b/lib/port/rte_port_source_sink.c
index ff9677cdfe..cb4b7fa7fb 100644
--- a/lib/port/rte_port_source_sink.c
+++ b/lib/port/rte_port_source_sink.c
@@ -75,8 +75,8 @@ pcap_source_load(struct rte_port_source *port,
/* first time open, get packet number */
pcap_handle = pcap_open_offline(file_name, pcap_errbuf);
if (pcap_handle == NULL) {
- RTE_LOG(ERR, PORT, "Failed to open pcap file "
- "'%s' for reading\n", file_name);
+ RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file "
+ "'%s' for reading", file_name);
goto error_exit;
}
@@ -88,29 +88,29 @@ pcap_source_load(struct rte_port_source *port,
port->pkt_len = rte_zmalloc_socket("PCAP",
(sizeof(*port->pkt_len) * n_pkts), 0, socket_id);
if (port->pkt_len == NULL) {
- RTE_LOG(ERR, PORT, "No enough memory\n");
+ RTE_LOG_LINE(ERR, PORT, "Not enough memory");
goto error_exit;
}
pkt_len_aligns = rte_malloc("PCAP",
(sizeof(*pkt_len_aligns) * n_pkts), 0);
if (pkt_len_aligns == NULL) {
- RTE_LOG(ERR, PORT, "No enough memory\n");
+ RTE_LOG_LINE(ERR, PORT, "Not enough memory");
goto error_exit;
}
port->pkts = rte_zmalloc_socket("PCAP",
(sizeof(*port->pkts) * n_pkts), 0, socket_id);
if (port->pkts == NULL) {
- RTE_LOG(ERR, PORT, "No enough memory\n");
+ RTE_LOG_LINE(ERR, PORT, "Not enough memory");
goto error_exit;
}
/* open 2nd time, get pkt_len */
pcap_handle = pcap_open_offline(file_name, pcap_errbuf);
if (pcap_handle == NULL) {
- RTE_LOG(ERR, PORT, "Failed to open pcap file "
- "'%s' for reading\n", file_name);
+ RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file "
+ "'%s' for reading", file_name);
goto error_exit;
}
@@ -128,7 +128,7 @@ pcap_source_load(struct rte_port_source *port,
buff = rte_zmalloc_socket("PCAP",
total_buff_len, 0, socket_id);
if (buff == NULL) {
- RTE_LOG(ERR, PORT, "No enough memory\n");
+ RTE_LOG_LINE(ERR, PORT, "Not enough memory");
goto error_exit;
}
@@ -137,8 +137,8 @@ pcap_source_load(struct rte_port_source *port,
/* open file one last time to copy the pkt content */
pcap_handle = pcap_open_offline(file_name, pcap_errbuf);
if (pcap_handle == NULL) {
- RTE_LOG(ERR, PORT, "Failed to open pcap file "
- "'%s' for reading\n", file_name);
+ RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file "
+ "'%s' for reading", file_name);
goto error_exit;
}
@@ -155,8 +155,8 @@ pcap_source_load(struct rte_port_source *port,
rte_free(pkt_len_aligns);
- RTE_LOG(INFO, PORT, "Successfully load pcap file "
- "'%s' with %u pkts\n",
+ RTE_LOG_LINE(INFO, PORT, "Successfully loaded pcap file "
+ "'%s' with %u pkts",
file_name, port->n_pkts);
return 0;
@@ -180,8 +180,8 @@ pcap_source_load(struct rte_port_source *port,
int _ret = 0; \
\
if (file_name) { \
- RTE_LOG(ERR, PORT, "Source port field " \
- "\"file_name\" is not NULL.\n"); \
+ RTE_LOG_LINE(ERR, PORT, "Source port field " \
+ "\"file_name\" is not NULL."); \
_ret = -1; \
} \
\
@@ -199,7 +199,7 @@ rte_port_source_create(void *params, int socket_id)
/* Check input arguments*/
if ((p == NULL) || (p->mempool == NULL)) {
- RTE_LOG(ERR, PORT, "%s: Invalid params\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid params", __func__);
return NULL;
}
@@ -207,7 +207,7 @@ rte_port_source_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -332,15 +332,15 @@ pcap_sink_open(struct rte_port_sink *port,
/** Open a dead pcap handler for opening dumper file */
tx_pcap = pcap_open_dead(DLT_EN10MB, 65535);
if (tx_pcap == NULL) {
- RTE_LOG(ERR, PORT, "Cannot open pcap dead handler\n");
+ RTE_LOG_LINE(ERR, PORT, "Cannot open pcap dead handler");
return -1;
}
/* The dumper is created using the previous pcap_t reference */
pcap_dumper = pcap_dump_open(tx_pcap, file_name);
if (pcap_dumper == NULL) {
- RTE_LOG(ERR, PORT, "Failed to open pcap file "
- "\"%s\" for writing\n", file_name);
+ RTE_LOG_LINE(ERR, PORT, "Failed to open pcap file "
+ "\"%s\" for writing", file_name);
return -1;
}
@@ -349,7 +349,7 @@ pcap_sink_open(struct rte_port_sink *port,
port->pkt_index = 0;
port->dump_finish = 0;
- RTE_LOG(INFO, PORT, "Ready to dump packets to file \"%s\"\n",
+ RTE_LOG_LINE(INFO, PORT, "Ready to dump packets to file \"%s\"",
file_name);
return 0;
@@ -402,7 +402,7 @@ pcap_sink_write_pkt(struct rte_port_sink *port, struct rte_mbuf *mbuf)
if ((port->max_pkts != 0) && (port->pkt_index >= port->max_pkts)) {
port->dump_finish = 1;
- RTE_LOG(INFO, PORT, "Dumped %u packets to file\n",
+ RTE_LOG_LINE(INFO, PORT, "Dumped %u packets to file",
port->pkt_index);
}
@@ -433,8 +433,8 @@ do { \
int _ret = 0; \
\
if (file_name) { \
- RTE_LOG(ERR, PORT, "Sink port field " \
- "\"file_name\" is not NULL.\n"); \
+ RTE_LOG_LINE(ERR, PORT, "Sink port field " \
+ "\"file_name\" is not NULL."); \
_ret = -1; \
} \
\
@@ -459,7 +459,7 @@ rte_port_sink_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
diff --git a/lib/port/rte_port_sym_crypto.c b/lib/port/rte_port_sym_crypto.c
index 27b7e07cea..8e9abff9d6 100644
--- a/lib/port/rte_port_sym_crypto.c
+++ b/lib/port/rte_port_sym_crypto.c
@@ -44,7 +44,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id)
/* Check input parameters */
if (conf == NULL) {
- RTE_LOG(ERR, PORT, "%s: params is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: params is NULL", __func__);
return NULL;
}
@@ -52,7 +52,7 @@ rte_port_sym_crypto_reader_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -100,7 +100,7 @@ static int
rte_port_sym_crypto_reader_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: port is NULL", __func__);
return -EINVAL;
}
@@ -167,7 +167,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -175,7 +175,7 @@ rte_port_sym_crypto_writer_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -285,7 +285,7 @@ static int
rte_port_sym_crypto_writer_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
@@ -353,7 +353,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id)
(conf->tx_burst_sz == 0) ||
(conf->tx_burst_sz > RTE_PORT_IN_BURST_SIZE_MAX) ||
(!rte_is_power_of_2(conf->tx_burst_sz))) {
- RTE_LOG(ERR, PORT, "%s: Invalid input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Invalid input parameters", __func__);
return NULL;
}
@@ -361,7 +361,7 @@ rte_port_sym_crypto_writer_nodrop_create(void *params, int socket_id)
port = rte_zmalloc_socket("PORT", sizeof(*port),
RTE_CACHE_LINE_SIZE, socket_id);
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Failed to allocate port\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Failed to allocate port", __func__);
return NULL;
}
@@ -497,7 +497,7 @@ static int
rte_port_sym_crypto_writer_nodrop_free(void *port)
{
if (port == NULL) {
- RTE_LOG(ERR, PORT, "%s: Port is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, PORT, "%s: Port is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/power/guest_channel.c b/lib/power/guest_channel.c
index a6f2097d5b..a9bbda8f48 100644
--- a/lib/power/guest_channel.c
+++ b/lib/power/guest_channel.c
@@ -59,38 +59,38 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id)
int fd = -1;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return -1;
}
/* check if path is already open */
if (global_fds[lcore_id] != -1) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is already open with fd %d",
lcore_id, global_fds[lcore_id]);
return -1;
}
snprintf(fd_path, PATH_MAX, "%s.%u", path, lcore_id);
- RTE_LOG(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u\n",
+ RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Opening channel '%s' for lcore %u",
fd_path, lcore_id);
fd = open(fd_path, O_RDWR);
if (fd < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error "
- "%s\n", fd_path, strerror(errno));
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Unable to connect to '%s' with error "
+ "%s", fd_path, strerror(errno));
return -1;
}
flags = fcntl(fd, F_GETFL, 0);
if (flags < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on fcntl get flags for file %s",
fd_path);
goto error;
}
flags |= O_NONBLOCK;
if (fcntl(fd, F_SETFL, flags) < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for "
- "file %s\n", fd_path);
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Failed on setting non-blocking mode for "
+ "file %s", fd_path);
goto error;
}
/* QEMU needs a delay after connection */
@@ -103,13 +103,13 @@ guest_channel_host_connect(const char *path, unsigned int lcore_id)
global_fds[lcore_id] = fd;
ret = guest_channel_send_msg(&pkt, lcore_id);
if (ret != 0) {
- RTE_LOG(ERR, GUEST_CHANNEL,
- "Error on channel '%s' communications test: %s\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL,
+ "Error on channel '%s' communications test: %s",
fd_path, ret > 0 ? strerror(ret) :
"channel not connected");
goto error;
}
- RTE_LOG(INFO, GUEST_CHANNEL, "Channel '%s' is now connected\n", fd_path);
+ RTE_LOG_LINE(INFO, GUEST_CHANNEL, "Channel '%s' is now connected", fd_path);
return 0;
error:
close(fd);
@@ -125,13 +125,13 @@ guest_channel_send_msg(struct rte_power_channel_packet *pkt,
void *buffer = pkt;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return -1;
}
if (global_fds[lcore_id] < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n");
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected");
return -1;
}
while (buffer_len > 0) {
@@ -166,13 +166,13 @@ int power_guest_channel_read_msg(void *pkt,
return -1;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return -1;
}
if (global_fds[lcore_id] < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel is not connected\n");
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel is not connected");
return -1;
}
@@ -181,10 +181,10 @@ int power_guest_channel_read_msg(void *pkt,
ret = poll(&fds, 1, TIMEOUT);
if (ret == 0) {
- RTE_LOG(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.\n");
+ RTE_LOG_LINE(DEBUG, GUEST_CHANNEL, "Timeout occurred during poll function.");
return -1;
} else if (ret < 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Error occurred during poll function: %s",
strerror(errno));
return -1;
}
@@ -200,7 +200,7 @@ int power_guest_channel_read_msg(void *pkt,
}
if (ret == 0) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.\n");
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Expected more data, but connection has been closed.");
return -1;
}
pkt = (char *)pkt + ret;
@@ -221,7 +221,7 @@ void
guest_channel_host_disconnect(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, GUEST_CHANNEL, "Channel(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return;
}
diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c
index 8b55f19247..dd143f2cc8 100644
--- a/lib/power/power_acpi_cpufreq.c
+++ b/lib/power/power_acpi_cpufreq.c
@@ -63,8 +63,8 @@ static int
set_freq_internal(struct acpi_power_info *pi, uint32_t idx)
{
if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Invalid frequency index %u, which "
- "should be less than %u\n", idx, pi->nb_freqs);
+ RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which "
+ "should be less than %u", idx, pi->nb_freqs);
return -1;
}
@@ -75,13 +75,13 @@ set_freq_internal(struct acpi_power_info *pi, uint32_t idx)
POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n",
idx, pi->freqs[idx], pi->lcore_id);
if (fseek(pi->f, 0, SEEK_SET) < 0) {
- RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 "
- "for setting frequency for lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 "
+ "for setting frequency for lcore %u", pi->lcore_id);
return -1;
}
if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
fflush(pi->f);
@@ -127,14 +127,14 @@ power_get_available_freqs(struct acpi_power_info *pi)
open_core_sysfs_file(&f, "r", POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id);
if (f == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_AVAIL_FREQ);
goto out;
}
ret = read_core_sysfs_s(f, buf, sizeof(buf));
if ((ret) < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_AVAIL_FREQ);
goto out;
}
@@ -143,12 +143,12 @@ power_get_available_freqs(struct acpi_power_info *pi)
count = rte_strsplit(buf, sizeof(buf), freqs,
RTE_MAX_LCORE_FREQS, ' ');
if (count <= 0) {
- RTE_LOG(ERR, POWER, "No available frequency in "
- ""POWER_SYSFILE_AVAIL_FREQ"\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "No available frequency in "
+ POWER_SYSFILE_AVAIL_FREQ, pi->lcore_id);
goto out;
}
if (count >= RTE_MAX_LCORE_FREQS) {
- RTE_LOG(ERR, POWER, "Too many available frequencies : %d\n",
+ RTE_LOG_LINE(ERR, POWER, "Too many available frequencies : %d",
count);
goto out;
}
@@ -196,14 +196,14 @@ power_init_for_setting_freq(struct acpi_power_info *pi)
open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id);
if (f == NULL) {
- RTE_LOG(ERR, POWER, "Failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to open %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
ret = read_core_sysfs_s(f, buf, sizeof(buf));
if ((ret) < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
@@ -237,7 +237,7 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -253,42 +253,42 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "in use\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "in use", lcore_id);
return -1;
}
pi->lcore_id = lcore_id;
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
- "userspace\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to "
+ "userspace", lcore_id);
goto fail;
}
/* Get the available frequencies */
if (power_get_available_freqs(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot get available frequencies of "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of "
+ "lcore %u", lcore_id);
goto fail;
}
/* Init for setting lcore frequency */
if (power_init_for_setting_freq(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot init for setting frequency for "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for "
+ "lcore %u", lcore_id);
goto fail;
}
/* Set freq to max by default */
if (power_acpi_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u "
- "to max\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u "
+ "to max", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u "
- "power management\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u "
+ "power management", lcore_id);
exp_state = POWER_ONGOING;
rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED,
rte_memory_order_release, rte_memory_order_relaxed);
@@ -310,7 +310,7 @@ power_acpi_cpufreq_exit(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -325,8 +325,8 @@ power_acpi_cpufreq_exit(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "not used\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "not used", lcore_id);
return -1;
}
@@ -336,14 +336,14 @@ power_acpi_cpufreq_exit(unsigned int lcore_id)
/* Set the governor back to the original */
if (power_set_governor_original(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set the governor of %u back "
- "to the original\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back "
+ "to the original", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from "
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from "
"'userspace' mode and been set back to the "
- "original\n", lcore_id);
+ "original", lcore_id);
exp_state = POWER_ONGOING;
rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE,
rte_memory_order_release, rte_memory_order_relaxed);
@@ -364,18 +364,18 @@ power_acpi_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return 0;
}
if (freqs == NULL) {
- RTE_LOG(ERR, POWER, "NULL buffer supplied\n");
+ RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied");
return 0;
}
pi = &lcore_power_info[lcore_id];
if (num < pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Buffer size is not enough\n");
+ RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough");
return 0;
}
rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t));
@@ -387,7 +387,7 @@ uint32_t
power_acpi_cpufreq_get_freq(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return RTE_POWER_INVALID_FREQ_INDEX;
}
@@ -398,7 +398,7 @@ int
power_acpi_cpufreq_set_freq(unsigned int lcore_id, uint32_t index)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -411,7 +411,7 @@ power_acpi_cpufreq_freq_down(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -429,7 +429,7 @@ power_acpi_cpufreq_freq_up(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -446,7 +446,7 @@ int
power_acpi_cpufreq_freq_max(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -470,7 +470,7 @@ power_acpi_cpufreq_freq_min(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -487,7 +487,7 @@ power_acpi_turbo_status(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -503,7 +503,7 @@ power_acpi_enable_turbo(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -513,16 +513,16 @@ power_acpi_enable_turbo(unsigned int lcore_id)
pi->turbo_enable = 1;
else {
pi->turbo_enable = 0;
- RTE_LOG(ERR, POWER,
- "Failed to enable turbo on lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to enable turbo on lcore %u",
lcore_id);
return -1;
}
/* Max may have changed, so call to max function */
if (power_acpi_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -536,7 +536,7 @@ power_acpi_disable_turbo(unsigned int lcore_id)
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -547,8 +547,8 @@ power_acpi_disable_turbo(unsigned int lcore_id)
if ((pi->turbo_available) && (pi->curr_idx <= 1)) {
/* Try to set freq to max by default coming out of turbo */
if (power_acpi_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -563,11 +563,11 @@ int power_acpi_get_capabilities(unsigned int lcore_id,
struct acpi_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
if (caps == NULL) {
- RTE_LOG(ERR, POWER, "Invalid argument\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid argument");
return -1;
}
diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c
index dbd9d2b3ee..44581fd48b 100644
--- a/lib/power/power_amd_pstate_cpufreq.c
+++ b/lib/power/power_amd_pstate_cpufreq.c
@@ -70,8 +70,8 @@ static int
set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx)
{
if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Invalid frequency index %u, which "
- "should be less than %u\n", idx, pi->nb_freqs);
+ RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which "
+ "should be less than %u", idx, pi->nb_freqs);
return -1;
}
@@ -82,13 +82,13 @@ set_freq_internal(struct amd_pstate_power_info *pi, uint32_t idx)
POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n",
idx, pi->freqs[idx], pi->lcore_id);
if (fseek(pi->f, 0, SEEK_SET) < 0) {
- RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 "
- "for setting frequency for lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 "
+ "for setting frequency for lcore %u", pi->lcore_id);
return -1;
}
if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
fflush(pi->f);
@@ -119,7 +119,7 @@ power_check_turbo(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_HIGHEST_PERF);
goto err;
}
@@ -127,21 +127,21 @@ power_check_turbo(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF,
pi->lcore_id);
if (f_nom == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_NOMINAL_PERF);
goto err;
}
ret = read_core_sysfs_u32(f_max, &highest_perf);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_HIGHEST_PERF);
goto err;
}
ret = read_core_sysfs_u32(f_nom, &nominal_perf);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_NOMINAL_PERF);
goto err;
}
@@ -190,7 +190,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SCALING_MAX_FREQ);
goto out;
}
@@ -198,7 +198,7 @@ power_get_available_freqs(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ,
pi->lcore_id);
if (f_min == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SCALING_MIN_FREQ);
goto out;
}
@@ -206,28 +206,28 @@ power_get_available_freqs(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_FREQ,
pi->lcore_id);
if (f_nom == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_NOMINAL_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_max, &scaling_max_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SCALING_MAX_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_min, &scaling_min_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SCALING_MIN_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_nom, &nominal_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_NOMINAL_FREQ);
goto out;
}
@@ -235,8 +235,8 @@ power_get_available_freqs(struct amd_pstate_power_info *pi)
power_check_turbo(pi);
if (scaling_max_freq < scaling_min_freq) {
- RTE_LOG(ERR, POWER, "scaling min freq exceeds max freq, "
- "not expected! Check system power policy\n");
+ RTE_LOG_LINE(ERR, POWER, "scaling min freq exceeds max freq, "
+ "not expected! Check system power policy");
goto out;
} else if (scaling_max_freq == scaling_min_freq) {
num_freqs = 1;
@@ -304,14 +304,14 @@ power_init_for_setting_freq(struct amd_pstate_power_info *pi)
open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id);
if (f == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
ret = read_core_sysfs_s(f, buf, sizeof(buf));
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
@@ -355,7 +355,7 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -371,42 +371,42 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state),
&exp_state, POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "in use\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "in use", lcore_id);
return -1;
}
pi->lcore_id = lcore_id;
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
- "userspace\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to "
+ "userspace", lcore_id);
goto fail;
}
/* Get the available frequencies */
if (power_get_available_freqs(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot get available frequencies of "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of "
+ "lcore %u", lcore_id);
goto fail;
}
/* Init for setting lcore frequency */
if (power_init_for_setting_freq(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot init for setting frequency for "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for "
+ "lcore %u", lcore_id);
goto fail;
}
/* Set freq to max by default */
if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u "
- "to max\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u "
+ "to max", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u "
- "power management\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u "
+ "power management", lcore_id);
rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release);
@@ -434,7 +434,7 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceeds %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -449,8 +449,8 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state),
&exp_state, POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "not used\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "not used", lcore_id);
return -1;
}
@@ -460,14 +460,14 @@ power_amd_pstate_cpufreq_exit(unsigned int lcore_id)
/* Set the governor back to the original */
if (power_set_governor_original(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set the governor of %u back "
- "to the original\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back "
+ "to the original", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from "
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from "
"'userspace' mode and been set back to the "
- "original\n", lcore_id);
+ "original", lcore_id);
rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release);
return 0;
@@ -484,18 +484,18 @@ power_amd_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return 0;
}
if (freqs == NULL) {
- RTE_LOG(ERR, POWER, "NULL buffer supplied\n");
+ RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied");
return 0;
}
pi = &lcore_power_info[lcore_id];
if (num < pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Buffer size is not enough\n");
+ RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough");
return 0;
}
rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t));
@@ -507,7 +507,7 @@ uint32_t
power_amd_pstate_cpufreq_get_freq(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return RTE_POWER_INVALID_FREQ_INDEX;
}
@@ -518,7 +518,7 @@ int
power_amd_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -531,7 +531,7 @@ power_amd_pstate_cpufreq_freq_down(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -549,7 +549,7 @@ power_amd_pstate_cpufreq_freq_up(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -566,7 +566,7 @@ int
power_amd_pstate_cpufreq_freq_max(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -591,7 +591,7 @@ power_amd_pstate_cpufreq_freq_min(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -607,7 +607,7 @@ power_amd_pstate_turbo_status(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -622,7 +622,7 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -632,8 +632,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id)
pi->turbo_enable = 1;
else {
pi->turbo_enable = 0;
- RTE_LOG(ERR, POWER,
- "Failed to enable turbo on lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to enable turbo on lcore %u",
lcore_id);
return -1;
}
@@ -643,8 +643,8 @@ power_amd_pstate_enable_turbo(unsigned int lcore_id)
*/
/* Max may have changed, so call to max function */
if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -658,7 +658,7 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id)
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -669,8 +669,8 @@ power_amd_pstate_disable_turbo(unsigned int lcore_id)
if ((pi->turbo_available) && (pi->curr_idx <= pi->nom_idx)) {
/* Try to set freq to max by default coming out of turbo */
if (power_amd_pstate_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -686,11 +686,11 @@ power_amd_pstate_get_capabilities(unsigned int lcore_id,
struct amd_pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
if (caps == NULL) {
- RTE_LOG(ERR, POWER, "Invalid argument\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid argument");
return -1;
}
diff --git a/lib/power/power_common.c b/lib/power/power_common.c
index bf77eafa88..bc57642cd1 100644
--- a/lib/power/power_common.c
+++ b/lib/power/power_common.c
@@ -163,14 +163,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor,
open_core_sysfs_file(&f_governor, "rw+", POWER_SYSFILE_GOVERNOR,
lcore_id);
if (f_governor == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_GOVERNOR);
goto out;
}
ret = read_core_sysfs_s(f_governor, buf, sizeof(buf));
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_GOVERNOR);
goto out;
}
@@ -190,14 +190,14 @@ power_set_governor(unsigned int lcore_id, const char *new_governor,
/* Write the new governor */
ret = write_core_sysfs_s(f_governor, new_governor);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to write %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to write %s",
POWER_SYSFILE_GOVERNOR);
goto out;
}
ret = 0;
- RTE_LOG(INFO, POWER, "Power management governor of lcore %u has been "
- "set to '%s' successfully\n", lcore_id, new_governor);
+ RTE_LOG_LINE(INFO, POWER, "Power management governor of lcore %u has been "
+ "set to '%s' successfully", lcore_id, new_governor);
out:
if (f_governor != NULL)
fclose(f_governor);
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index bb70f6ae52..83e1e62830 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -73,8 +73,8 @@ static int
set_freq_internal(struct cppc_power_info *pi, uint32_t idx)
{
if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Invalid frequency index %u, which "
- "should be less than %u\n", idx, pi->nb_freqs);
+ RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which "
+ "should be less than %u", idx, pi->nb_freqs);
return -1;
}
@@ -85,13 +85,13 @@ set_freq_internal(struct cppc_power_info *pi, uint32_t idx)
POWER_DEBUG_TRACE("Frequency[%u] %u to be set for lcore %u\n",
idx, pi->freqs[idx], pi->lcore_id);
if (fseek(pi->f, 0, SEEK_SET) < 0) {
- RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 "
- "for setting frequency for lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 "
+ "for setting frequency for lcore %u", pi->lcore_id);
return -1;
}
if (fprintf(pi->f, "%u", pi->freqs[idx]) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
fflush(pi->f);
@@ -122,7 +122,7 @@ power_check_turbo(struct cppc_power_info *pi)
open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_HIGHEST_PERF,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_HIGHEST_PERF);
goto err;
}
@@ -130,7 +130,7 @@ power_check_turbo(struct cppc_power_info *pi)
open_core_sysfs_file(&f_nom, "r", POWER_SYSFILE_NOMINAL_PERF,
pi->lcore_id);
if (f_nom == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_NOMINAL_PERF);
goto err;
}
@@ -138,28 +138,28 @@ power_check_turbo(struct cppc_power_info *pi)
open_core_sysfs_file(&f_cmax, "r", POWER_SYSFILE_SYS_MAX,
pi->lcore_id);
if (f_cmax == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SYS_MAX);
goto err;
}
ret = read_core_sysfs_u32(f_max, &highest_perf);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_HIGHEST_PERF);
goto err;
}
ret = read_core_sysfs_u32(f_nom, &nominal_perf);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_NOMINAL_PERF);
goto err;
}
ret = read_core_sysfs_u32(f_cmax, &cpuinfo_max_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SYS_MAX);
goto err;
}
@@ -209,7 +209,7 @@ power_get_available_freqs(struct cppc_power_info *pi)
open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_SCALING_MAX_FREQ,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SCALING_MAX_FREQ);
goto out;
}
@@ -217,21 +217,21 @@ power_get_available_freqs(struct cppc_power_info *pi)
open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_SCALING_MIN_FREQ,
pi->lcore_id);
if (f_min == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SCALING_MIN_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_max, &scaling_max_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SCALING_MAX_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_min, &scaling_min_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SCALING_MIN_FREQ);
goto out;
}
@@ -249,7 +249,7 @@ power_get_available_freqs(struct cppc_power_info *pi)
num_freqs = (nominal_perf - scaling_min_freq) / BUS_FREQ + 1 +
pi->turbo_available;
if (num_freqs >= RTE_MAX_LCORE_FREQS) {
- RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n",
+ RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d",
num_freqs);
goto out;
}
@@ -290,14 +290,14 @@ power_init_for_setting_freq(struct cppc_power_info *pi)
open_core_sysfs_file(&f, "rw+", POWER_SYSFILE_SETSPEED, pi->lcore_id);
if (f == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
ret = read_core_sysfs_s(f, buf, sizeof(buf));
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_SETSPEED);
goto err;
}
@@ -341,7 +341,7 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u cannot exceed %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -357,42 +357,42 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "in use\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "in use", lcore_id);
return -1;
}
pi->lcore_id = lcore_id;
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
- "userspace\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to "
+ "userspace", lcore_id);
goto fail;
}
/* Get the available frequencies */
if (power_get_available_freqs(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot get available frequencies of "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of "
+ "lcore %u", lcore_id);
goto fail;
}
/* Init for setting lcore frequency */
if (power_init_for_setting_freq(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot init for setting frequency for "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for "
+ "lcore %u", lcore_id);
goto fail;
}
/* Set freq to max by default */
if (power_cppc_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u "
- "to max\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u "
+ "to max", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u "
- "power management\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u "
+ "power management", lcore_id);
rte_atomic_store_explicit(&(pi->state), POWER_USED, rte_memory_order_release);
@@ -420,7 +420,7 @@ power_cppc_cpufreq_exit(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u cannot exceed %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -435,8 +435,8 @@ power_cppc_cpufreq_exit(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "not used\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "not used", lcore_id);
return -1;
}
@@ -446,14 +446,14 @@ power_cppc_cpufreq_exit(unsigned int lcore_id)
/* Set the governor back to the original */
if (power_set_governor_original(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set the governor of %u back "
- "to the original\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back "
+ "to the original", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from "
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from "
"'userspace' mode and been set back to the "
- "original\n", lcore_id);
+ "original", lcore_id);
rte_atomic_store_explicit(&(pi->state), POWER_IDLE, rte_memory_order_release);
return 0;
@@ -470,18 +470,18 @@ power_cppc_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return 0;
}
if (freqs == NULL) {
- RTE_LOG(ERR, POWER, "NULL buffer supplied\n");
+ RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied");
return 0;
}
pi = &lcore_power_info[lcore_id];
if (num < pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Buffer size is not enough\n");
+ RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough");
return 0;
}
rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t));
@@ -493,7 +493,7 @@ uint32_t
power_cppc_cpufreq_get_freq(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return RTE_POWER_INVALID_FREQ_INDEX;
}
@@ -504,7 +504,7 @@ int
power_cppc_cpufreq_set_freq(unsigned int lcore_id, uint32_t index)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -517,7 +517,7 @@ power_cppc_cpufreq_freq_down(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -535,7 +535,7 @@ power_cppc_cpufreq_freq_up(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -552,7 +552,7 @@ int
power_cppc_cpufreq_freq_max(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -576,7 +576,7 @@ power_cppc_cpufreq_freq_min(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -592,7 +592,7 @@ power_cppc_turbo_status(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -607,7 +607,7 @@ power_cppc_enable_turbo(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -617,8 +617,8 @@ power_cppc_enable_turbo(unsigned int lcore_id)
pi->turbo_enable = 1;
else {
pi->turbo_enable = 0;
- RTE_LOG(ERR, POWER,
- "Failed to enable turbo on lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to enable turbo on lcore %u",
lcore_id);
return -1;
}
@@ -628,8 +628,8 @@ power_cppc_enable_turbo(unsigned int lcore_id)
*/
/* Max may have changed, so call to max function */
if (power_cppc_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -643,7 +643,7 @@ power_cppc_disable_turbo(unsigned int lcore_id)
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -654,8 +654,8 @@ power_cppc_disable_turbo(unsigned int lcore_id)
if ((pi->turbo_available) && (pi->curr_idx <= 1)) {
/* Try to set freq to max by default coming out of turbo */
if (power_cppc_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -671,11 +671,11 @@ power_cppc_get_capabilities(unsigned int lcore_id,
struct cppc_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
if (caps == NULL) {
- RTE_LOG(ERR, POWER, "Invalid argument\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid argument");
return -1;
}
diff --git a/lib/power/power_intel_uncore.c b/lib/power/power_intel_uncore.c
index 688aebc4ee..0ee8e603d2 100644
--- a/lib/power/power_intel_uncore.c
+++ b/lib/power/power_intel_uncore.c
@@ -52,8 +52,8 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx)
int ret;
if (idx >= MAX_UNCORE_FREQS || idx >= ui->nb_freqs) {
- RTE_LOG(DEBUG, POWER, "Invalid uncore frequency index %u, which "
- "should be less than %u\n", idx, ui->nb_freqs);
+ RTE_LOG_LINE(DEBUG, POWER, "Invalid uncore frequency index %u, which "
+ "should be less than %u", idx, ui->nb_freqs);
return -1;
}
@@ -65,13 +65,13 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx)
open_core_sysfs_file(&ui->f_cur_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ,
ui->pkg, ui->die);
if (ui->f_cur_max == NULL) {
- RTE_LOG(DEBUG, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "failed to open %s",
POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ);
return -1;
}
ret = read_core_sysfs_u32(ui->f_cur_max, &curr_max_freq);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s",
POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ);
fclose(ui->f_cur_max);
return -1;
@@ -79,14 +79,14 @@ set_uncore_freq_internal(struct uncore_power_info *ui, uint32_t idx)
/* check this value first before fprintf value to f_cur_max, so value isn't overwritten */
if (fprintf(ui->f_cur_min, "%u", target_uncore_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for "
- "pkg %02u die %02u\n", ui->pkg, ui->die);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for "
+ "pkg %02u die %02u", ui->pkg, ui->die);
return -1;
}
if (fprintf(ui->f_cur_max, "%u", target_uncore_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new uncore frequency for "
- "pkg %02u die %02u\n", ui->pkg, ui->die);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new uncore frequency for "
+ "pkg %02u die %02u", ui->pkg, ui->die);
return -1;
}
@@ -121,13 +121,13 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui)
open_core_sysfs_file(&f_base_max, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ,
ui->pkg, ui->die);
if (f_base_max == NULL) {
- RTE_LOG(DEBUG, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "failed to open %s",
POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ);
goto err;
}
ret = read_core_sysfs_u32(f_base_max, &base_max_freq);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s",
POWER_INTEL_UNCORE_SYSFILE_BASE_MAX_FREQ);
goto err;
}
@@ -136,14 +136,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui)
open_core_sysfs_file(&f_base_min, "r", POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ,
ui->pkg, ui->die);
if (f_base_min == NULL) {
- RTE_LOG(DEBUG, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "failed to open %s",
POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ);
goto err;
}
if (f_base_min != NULL) {
ret = read_core_sysfs_u32(f_base_min, &base_min_freq);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s",
POWER_INTEL_UNCORE_SYSFILE_BASE_MIN_FREQ);
goto err;
}
@@ -153,14 +153,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui)
open_core_sysfs_file(&f_min, "rw+", POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ,
ui->pkg, ui->die);
if (f_min == NULL) {
- RTE_LOG(DEBUG, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "failed to open %s",
POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ);
goto err;
}
if (f_min != NULL) {
ret = read_core_sysfs_u32(f_min, &min_freq);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s",
POWER_INTEL_UNCORE_SYSFILE_MIN_FREQ);
goto err;
}
@@ -170,14 +170,14 @@ power_init_for_setting_uncore_freq(struct uncore_power_info *ui)
open_core_sysfs_file(&f_max, "rw+", POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ,
ui->pkg, ui->die);
if (f_max == NULL) {
- RTE_LOG(DEBUG, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "failed to open %s",
POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ);
goto err;
}
if (f_max != NULL) {
ret = read_core_sysfs_u32(f_max, &max_freq);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to read %s",
POWER_INTEL_UNCORE_SYSFILE_MAX_FREQ);
goto err;
}
@@ -222,7 +222,7 @@ power_get_available_uncore_freqs(struct uncore_power_info *ui)
num_uncore_freqs = (ui->init_max_freq - ui->init_min_freq) / BUS_FREQ + 1;
if (num_uncore_freqs >= MAX_UNCORE_FREQS) {
- RTE_LOG(ERR, POWER, "Too many available uncore frequencies: %d\n",
+ RTE_LOG_LINE(ERR, POWER, "Too many available uncore frequencies: %d",
num_uncore_freqs);
goto out;
}
@@ -250,7 +250,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die)
if (max_pkgs == 0)
return -1;
if (pkg >= max_pkgs) {
- RTE_LOG(DEBUG, POWER, "Package number %02u can not exceed %u\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Package number %02u can not exceed %u",
pkg, max_pkgs);
return -1;
}
@@ -259,7 +259,7 @@ check_pkg_die_values(unsigned int pkg, unsigned int die)
if (max_dies == 0)
return -1;
if (die >= max_dies) {
- RTE_LOG(DEBUG, POWER, "Die number %02u can not exceed %u\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Die number %02u can not exceed %u",
die, max_dies);
return -1;
}
@@ -282,15 +282,15 @@ power_intel_uncore_init(unsigned int pkg, unsigned int die)
/* Init for setting uncore die frequency */
if (power_init_for_setting_uncore_freq(ui) < 0) {
- RTE_LOG(DEBUG, POWER, "Cannot init for setting uncore frequency for "
- "pkg %02u die %02u\n", pkg, die);
+ RTE_LOG_LINE(DEBUG, POWER, "Cannot init for setting uncore frequency for "
+ "pkg %02u die %02u", pkg, die);
return -1;
}
/* Get the available frequencies */
if (power_get_available_uncore_freqs(ui) < 0) {
- RTE_LOG(DEBUG, POWER, "Cannot get available uncore frequencies of "
- "pkg %02u die %02u\n", pkg, die);
+ RTE_LOG_LINE(DEBUG, POWER, "Cannot get available uncore frequencies of "
+ "pkg %02u die %02u", pkg, die);
return -1;
}
@@ -309,14 +309,14 @@ power_intel_uncore_exit(unsigned int pkg, unsigned int die)
ui = &uncore_info[pkg][die];
if (fprintf(ui->f_cur_min, "%u", ui->org_min_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for "
- "pkg %02u die %02u\n", ui->pkg, ui->die);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for "
+ "pkg %02u die %02u", ui->pkg, ui->die);
return -1;
}
if (fprintf(ui->f_cur_max, "%u", ui->org_max_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write original uncore frequency for "
- "pkg %02u die %02u\n", ui->pkg, ui->die);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write original uncore frequency for "
+ "pkg %02u die %02u", ui->pkg, ui->die);
return -1;
}
@@ -385,13 +385,13 @@ power_intel_uncore_freqs(unsigned int pkg, unsigned int die, uint32_t *freqs, ui
return -1;
if (freqs == NULL) {
- RTE_LOG(ERR, POWER, "NULL buffer supplied\n");
+ RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied");
return 0;
}
ui = &uncore_info[pkg][die];
if (num < ui->nb_freqs) {
- RTE_LOG(ERR, POWER, "Buffer size is not enough\n");
+ RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough");
return 0;
}
rte_memcpy(freqs, ui->freqs, ui->nb_freqs * sizeof(uint32_t));
@@ -419,10 +419,10 @@ power_intel_uncore_get_num_pkgs(void)
d = opendir(INTEL_UNCORE_FREQUENCY_DIR);
if (d == NULL) {
- RTE_LOG(ERR, POWER,
+ RTE_LOG_LINE(ERR, POWER,
"Uncore frequency management not supported/enabled on this kernel. "
"Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel"
- " >= 5.6\n");
+ " >= 5.6");
return 0;
}
@@ -451,16 +451,16 @@ power_intel_uncore_get_num_dies(unsigned int pkg)
if (max_pkgs == 0)
return 0;
if (pkg >= max_pkgs) {
- RTE_LOG(DEBUG, POWER, "Invalid package number\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Invalid package number");
return 0;
}
d = opendir(INTEL_UNCORE_FREQUENCY_DIR);
if (d == NULL) {
- RTE_LOG(ERR, POWER,
+ RTE_LOG_LINE(ERR, POWER,
"Uncore frequency management not supported/enabled on this kernel. "
"Please enable CONFIG_INTEL_UNCORE_FREQ_CONTROL if on Intel x86 with linux kernel"
- " >= 5.6\n");
+ " >= 5.6");
return 0;
}
diff --git a/lib/power/power_kvm_vm.c b/lib/power/power_kvm_vm.c
index db031f4310..218799491e 100644
--- a/lib/power/power_kvm_vm.c
+++ b/lib/power/power_kvm_vm.c
@@ -25,7 +25,7 @@ int
power_kvm_vm_init(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return -1;
}
@@ -46,16 +46,16 @@ power_kvm_vm_freqs(__rte_unused unsigned int lcore_id,
__rte_unused uint32_t *freqs,
__rte_unused uint32_t num)
{
- RTE_LOG(ERR, POWER, "rte_power_freqs is not implemented "
- "for Virtual Machine Power Management\n");
+ RTE_LOG_LINE(ERR, POWER, "rte_power_freqs is not implemented "
+ "for Virtual Machine Power Management");
return -ENOTSUP;
}
uint32_t
power_kvm_vm_get_freq(__rte_unused unsigned int lcore_id)
{
- RTE_LOG(ERR, POWER, "rte_power_get_freq is not implemented "
- "for Virtual Machine Power Management\n");
+ RTE_LOG_LINE(ERR, POWER, "rte_power_get_freq is not implemented "
+ "for Virtual Machine Power Management");
return -ENOTSUP;
}
@@ -63,8 +63,8 @@ int
power_kvm_vm_set_freq(__rte_unused unsigned int lcore_id,
__rte_unused uint32_t index)
{
- RTE_LOG(ERR, POWER, "rte_power_set_freq is not implemented "
- "for Virtual Machine Power Management\n");
+ RTE_LOG_LINE(ERR, POWER, "rte_power_set_freq is not implemented "
+ "for Virtual Machine Power Management");
return -ENOTSUP;
}
@@ -74,7 +74,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction)
int ret;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Core(%u) is out of range 0...%d\n",
+ RTE_LOG_LINE(ERR, POWER, "Core(%u) is out of range 0...%d",
lcore_id, RTE_MAX_LCORE-1);
return -1;
}
@@ -82,7 +82,7 @@ send_msg(unsigned int lcore_id, uint32_t scale_direction)
ret = guest_channel_send_msg(&pkt[lcore_id], lcore_id);
if (ret == 0)
return 1;
- RTE_LOG(DEBUG, POWER, "Error sending message: %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Error sending message: %s",
ret > 0 ? strerror(ret) : "channel not connected");
return -1;
}
@@ -114,7 +114,7 @@ power_kvm_vm_freq_min(unsigned int lcore_id)
int
power_kvm_vm_turbo_status(__rte_unused unsigned int lcore_id)
{
- RTE_LOG(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management\n");
+ RTE_LOG_LINE(ERR, POWER, "rte_power_turbo_status is not implemented for Virtual Machine Power Management");
return -ENOTSUP;
}
@@ -134,6 +134,6 @@ struct rte_power_core_capabilities;
int power_kvm_vm_get_capabilities(__rte_unused unsigned int lcore_id,
__rte_unused struct rte_power_core_capabilities *caps)
{
- RTE_LOG(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management\n");
+ RTE_LOG_LINE(ERR, POWER, "rte_power_get_capabilities is not implemented for Virtual Machine Power Management");
return -ENOTSUP;
}
diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c
index 5ca5f60bcd..56aa302b5d 100644
--- a/lib/power/power_pstate_cpufreq.c
+++ b/lib/power/power_pstate_cpufreq.c
@@ -82,7 +82,7 @@ power_read_turbo_pct(uint64_t *outVal)
fd = open(POWER_SYSFILE_TURBO_PCT, O_RDONLY);
if (fd < 0) {
- RTE_LOG(ERR, POWER, "Error opening '%s': %s\n", POWER_SYSFILE_TURBO_PCT,
+ RTE_LOG_LINE(ERR, POWER, "Error opening '%s': %s", POWER_SYSFILE_TURBO_PCT,
strerror(errno));
return fd;
}
@@ -90,7 +90,7 @@ power_read_turbo_pct(uint64_t *outVal)
ret = read(fd, val, sizeof(val));
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Error reading '%s': %s\n", POWER_SYSFILE_TURBO_PCT,
+ RTE_LOG_LINE(ERR, POWER, "Error reading '%s': %s", POWER_SYSFILE_TURBO_PCT,
strerror(errno));
goto out;
}
@@ -98,7 +98,7 @@ power_read_turbo_pct(uint64_t *outVal)
errno = 0;
*outVal = (uint64_t) strtol(val, &endptr, 10);
if (errno != 0 || (*endptr != 0 && *endptr != '\n')) {
- RTE_LOG(ERR, POWER, "Error converting str to digits, read from %s: %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Error converting str to digits, read from %s: %s",
POWER_SYSFILE_TURBO_PCT, strerror(errno));
ret = -1;
goto out;
@@ -126,7 +126,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
open_core_sysfs_file(&f_base_max, "r", POWER_SYSFILE_BASE_MAX_FREQ,
pi->lcore_id);
if (f_base_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_BASE_MAX_FREQ);
goto err;
}
@@ -134,7 +134,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
open_core_sysfs_file(&f_base_min, "r", POWER_SYSFILE_BASE_MIN_FREQ,
pi->lcore_id);
if (f_base_min == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_BASE_MIN_FREQ);
goto err;
}
@@ -142,7 +142,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
open_core_sysfs_file(&f_min, "rw+", POWER_SYSFILE_MIN_FREQ,
pi->lcore_id);
if (f_min == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_MIN_FREQ);
goto err;
}
@@ -150,7 +150,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
open_core_sysfs_file(&f_max, "rw+", POWER_SYSFILE_MAX_FREQ,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_MAX_FREQ);
goto err;
}
@@ -162,7 +162,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
/* read base max ratio */
ret = read_core_sysfs_u32(f_base_max, &base_max_ratio);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_BASE_MAX_FREQ);
goto err;
}
@@ -170,7 +170,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
/* read base min ratio */
ret = read_core_sysfs_u32(f_base_min, &base_min_ratio);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_BASE_MIN_FREQ);
goto err;
}
@@ -179,7 +179,7 @@ power_init_for_setting_freq(struct pstate_power_info *pi)
if (f_base != NULL) {
ret = read_core_sysfs_u32(f_base, &base_ratio);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_BASE_FREQ);
goto err;
}
@@ -257,8 +257,8 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx)
uint32_t target_freq = 0;
if (idx >= RTE_MAX_LCORE_FREQS || idx >= pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Invalid frequency index %u, which "
- "should be less than %u\n", idx, pi->nb_freqs);
+ RTE_LOG_LINE(ERR, POWER, "Invalid frequency index %u, which "
+ "should be less than %u", idx, pi->nb_freqs);
return -1;
}
@@ -270,15 +270,15 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx)
* User need change the min/max as same value.
*/
if (fseek(pi->f_cur_min, 0, SEEK_SET) < 0) {
- RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 "
- "for setting frequency for lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 "
+ "for setting frequency for lcore %u",
pi->lcore_id);
return -1;
}
if (fseek(pi->f_cur_max, 0, SEEK_SET) < 0) {
- RTE_LOG(ERR, POWER, "Fail to set file position indicator to 0 "
- "for setting frequency for lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Fail to set file position indicator to 0 "
+ "for setting frequency for lcore %u",
pi->lcore_id);
return -1;
}
@@ -288,7 +288,7 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx)
if (pi->turbo_enable)
target_freq = pi->sys_max_freq;
else {
- RTE_LOG(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Turbo is off, frequency can't be scaled up more %u",
pi->lcore_id);
return -1;
}
@@ -299,14 +299,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx)
if (idx > pi->curr_idx) {
if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
@@ -322,14 +322,14 @@ set_freq_internal(struct pstate_power_info *pi, uint32_t idx)
if (idx < pi->curr_idx) {
if (fprintf(pi->f_cur_max, "%u", target_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
if (fprintf(pi->f_cur_min, "%u", target_freq) < 0) {
- RTE_LOG(ERR, POWER, "Fail to write new frequency for "
- "lcore %u\n", pi->lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Fail to write new frequency for "
+ "lcore %u", pi->lcore_id);
return -1;
}
@@ -384,7 +384,7 @@ power_get_available_freqs(struct pstate_power_info *pi)
open_core_sysfs_file(&f_max, "r", POWER_SYSFILE_BASE_MAX_FREQ,
pi->lcore_id);
if (f_max == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_BASE_MAX_FREQ);
goto out;
}
@@ -392,7 +392,7 @@ power_get_available_freqs(struct pstate_power_info *pi)
open_core_sysfs_file(&f_min, "r", POWER_SYSFILE_BASE_MIN_FREQ,
pi->lcore_id);
if (f_min == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_BASE_MIN_FREQ);
goto out;
}
@@ -400,14 +400,14 @@ power_get_available_freqs(struct pstate_power_info *pi)
/* read base ratios */
ret = read_core_sysfs_u32(f_max, &sys_max_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_BASE_MAX_FREQ);
goto out;
}
ret = read_core_sysfs_u32(f_min, &sys_min_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_BASE_MIN_FREQ);
goto out;
}
@@ -450,7 +450,7 @@ power_get_available_freqs(struct pstate_power_info *pi)
num_freqs = (RTE_MIN(base_max_freq, sys_max_freq) - sys_min_freq) / BUS_FREQ
+ 1 + pi->turbo_available;
if (num_freqs >= RTE_MAX_LCORE_FREQS) {
- RTE_LOG(ERR, POWER, "Too many available frequencies: %d\n",
+ RTE_LOG_LINE(ERR, POWER, "Too many available frequencies: %d",
num_freqs);
goto out;
}
@@ -494,14 +494,14 @@ power_get_cur_idx(struct pstate_power_info *pi)
open_core_sysfs_file(&f_cur, "r", POWER_SYSFILE_CUR_FREQ,
pi->lcore_id);
if (f_cur == NULL) {
- RTE_LOG(ERR, POWER, "failed to open %s\n",
+ RTE_LOG_LINE(ERR, POWER, "failed to open %s",
POWER_SYSFILE_CUR_FREQ);
goto fail;
}
ret = read_core_sysfs_u32(f_cur, &sys_cur_freq);
if (ret < 0) {
- RTE_LOG(ERR, POWER, "Failed to read %s\n",
+ RTE_LOG_LINE(ERR, POWER, "Failed to read %s",
POWER_SYSFILE_CUR_FREQ);
goto fail;
}
@@ -543,7 +543,7 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceed %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceed %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -559,47 +559,47 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "in use\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "in use", lcore_id);
return -1;
}
pi->lcore_id = lcore_id;
/* Check and set the governor */
if (power_set_governor_performance(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
- "performance\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set governor of lcore %u to "
+ "performance", lcore_id);
goto fail;
}
/* Init for setting lcore frequency */
if (power_init_for_setting_freq(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot init for setting frequency for "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot init for setting frequency for "
+ "lcore %u", lcore_id);
goto fail;
}
/* Get the available frequencies */
if (power_get_available_freqs(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot get available frequencies of "
- "lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot get available frequencies of "
+ "lcore %u", lcore_id);
goto fail;
}
if (power_get_cur_idx(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot get current frequency "
- "index of lcore %u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot get current frequency "
+ "index of lcore %u", lcore_id);
goto fail;
}
/* Set freq to max by default */
if (power_pstate_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set frequency of lcore %u "
- "to max\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set frequency of lcore %u "
+ "to max", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Initialized successfully for lcore %u "
- "power management\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Initialized successfully for lcore %u "
+ "power management", lcore_id);
exp_state = POWER_ONGOING;
rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_USED,
rte_memory_order_release, rte_memory_order_relaxed);
@@ -621,7 +621,7 @@ power_pstate_cpufreq_exit(unsigned int lcore_id)
uint32_t exp_state;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Lcore id %u can not exceeds %u\n",
+ RTE_LOG_LINE(ERR, POWER, "Lcore id %u can not exceed %u",
lcore_id, RTE_MAX_LCORE - 1U);
return -1;
}
@@ -637,8 +637,8 @@ power_pstate_cpufreq_exit(unsigned int lcore_id)
if (!rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state,
POWER_ONGOING,
rte_memory_order_acquire, rte_memory_order_relaxed)) {
- RTE_LOG(INFO, POWER, "Power management of lcore %u is "
- "not used\n", lcore_id);
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u is "
+ "not used", lcore_id);
return -1;
}
@@ -650,14 +650,14 @@ power_pstate_cpufreq_exit(unsigned int lcore_id)
/* Set the governor back to the original */
if (power_set_governor_original(pi) < 0) {
- RTE_LOG(ERR, POWER, "Cannot set the governor of %u back "
- "to the original\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Cannot set the governor of %u back "
+ "to the original", lcore_id);
goto fail;
}
- RTE_LOG(INFO, POWER, "Power management of lcore %u has exited from "
+ RTE_LOG_LINE(INFO, POWER, "Power management of lcore %u has exited from "
"'performance' mode and been set back to the "
- "original\n", lcore_id);
+ "original", lcore_id);
exp_state = POWER_ONGOING;
rte_atomic_compare_exchange_strong_explicit(&(pi->state), &exp_state, POWER_IDLE,
rte_memory_order_release, rte_memory_order_relaxed);
@@ -679,18 +679,18 @@ power_pstate_cpufreq_freqs(unsigned int lcore_id, uint32_t *freqs, uint32_t num)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return 0;
}
if (freqs == NULL) {
- RTE_LOG(ERR, POWER, "NULL buffer supplied\n");
+ RTE_LOG_LINE(ERR, POWER, "NULL buffer supplied");
return 0;
}
pi = &lcore_power_info[lcore_id];
if (num < pi->nb_freqs) {
- RTE_LOG(ERR, POWER, "Buffer size is not enough\n");
+ RTE_LOG_LINE(ERR, POWER, "Buffer size is not enough");
return 0;
}
rte_memcpy(freqs, pi->freqs, pi->nb_freqs * sizeof(uint32_t));
@@ -702,7 +702,7 @@ uint32_t
power_pstate_cpufreq_get_freq(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return RTE_POWER_INVALID_FREQ_INDEX;
}
@@ -714,7 +714,7 @@ int
power_pstate_cpufreq_set_freq(unsigned int lcore_id, uint32_t index)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -727,7 +727,7 @@ power_pstate_cpufreq_freq_up(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -746,7 +746,7 @@ power_pstate_cpufreq_freq_down(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -762,7 +762,7 @@ int
power_pstate_cpufreq_freq_max(unsigned int lcore_id)
{
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -787,7 +787,7 @@ power_pstate_cpufreq_freq_min(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -804,7 +804,7 @@ power_pstate_turbo_status(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -819,7 +819,7 @@ power_pstate_enable_turbo(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -829,8 +829,8 @@ power_pstate_enable_turbo(unsigned int lcore_id)
pi->turbo_enable = 1;
else {
pi->turbo_enable = 0;
- RTE_LOG(ERR, POWER,
- "Failed to enable turbo on lcore %u\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to enable turbo on lcore %u",
lcore_id);
return -1;
}
@@ -845,7 +845,7 @@ power_pstate_disable_turbo(unsigned int lcore_id)
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
@@ -856,8 +856,8 @@ power_pstate_disable_turbo(unsigned int lcore_id)
if (pi->turbo_available && pi->curr_idx <= 1) {
/* Try to set freq to max by default coming out of turbo */
if (power_pstate_cpufreq_freq_max(lcore_id) < 0) {
- RTE_LOG(ERR, POWER,
- "Failed to set frequency of lcore %u to max\n",
+ RTE_LOG_LINE(ERR, POWER,
+ "Failed to set frequency of lcore %u to max",
lcore_id);
return -1;
}
@@ -873,11 +873,11 @@ int power_pstate_get_capabilities(unsigned int lcore_id,
struct pstate_power_info *pi;
if (lcore_id >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID");
return -1;
}
if (caps == NULL) {
- RTE_LOG(ERR, POWER, "Invalid argument\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid argument");
return -1;
}
diff --git a/lib/power/rte_power.c b/lib/power/rte_power.c
index 1502612b0a..7bee4f88f9 100644
--- a/lib/power/rte_power.c
+++ b/lib/power/rte_power.c
@@ -74,7 +74,7 @@ rte_power_set_env(enum power_management_env env)
rte_spinlock_lock(&global_env_cfg_lock);
if (global_default_env != PM_ENV_NOT_SET) {
- RTE_LOG(ERR, POWER, "Power Management Environment already set.\n");
+ RTE_LOG_LINE(ERR, POWER, "Power Management Environment already set.");
rte_spinlock_unlock(&global_env_cfg_lock);
return -1;
}
@@ -143,7 +143,7 @@ rte_power_set_env(enum power_management_env env)
rte_power_freq_disable_turbo = power_amd_pstate_disable_turbo;
rte_power_get_capabilities = power_amd_pstate_get_capabilities;
} else {
- RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n",
+ RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set",
env);
ret = -1;
}
@@ -190,46 +190,46 @@ rte_power_init(unsigned int lcore_id)
case PM_ENV_AMD_PSTATE_CPUFREQ:
return power_amd_pstate_cpufreq_init(lcore_id);
default:
- RTE_LOG(INFO, POWER, "Env isn't set yet!\n");
+ RTE_LOG_LINE(INFO, POWER, "Env isn't set yet!");
}
/* Auto detect Environment */
- RTE_LOG(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise ACPI cpufreq power management...");
ret = power_acpi_cpufreq_init(lcore_id);
if (ret == 0) {
rte_power_set_env(PM_ENV_ACPI_CPUFREQ);
goto out;
}
- RTE_LOG(INFO, POWER, "Attempting to initialise PSTAT power management...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise PSTAT power management...");
ret = power_pstate_cpufreq_init(lcore_id);
if (ret == 0) {
rte_power_set_env(PM_ENV_PSTATE_CPUFREQ);
goto out;
}
- RTE_LOG(INFO, POWER, "Attempting to initialise AMD PSTATE power management...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise AMD PSTATE power management...");
ret = power_amd_pstate_cpufreq_init(lcore_id);
if (ret == 0) {
rte_power_set_env(PM_ENV_AMD_PSTATE_CPUFREQ);
goto out;
}
- RTE_LOG(INFO, POWER, "Attempting to initialise CPPC power management...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise CPPC power management...");
ret = power_cppc_cpufreq_init(lcore_id);
if (ret == 0) {
rte_power_set_env(PM_ENV_CPPC_CPUFREQ);
goto out;
}
- RTE_LOG(INFO, POWER, "Attempting to initialise VM power management...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise VM power management...");
ret = power_kvm_vm_init(lcore_id);
if (ret == 0) {
rte_power_set_env(PM_ENV_KVM_VM);
goto out;
}
- RTE_LOG(ERR, POWER, "Unable to set Power Management Environment for lcore "
- "%u\n", lcore_id);
+ RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment for lcore "
+ "%u", lcore_id);
out:
return ret;
}
@@ -249,7 +249,7 @@ rte_power_exit(unsigned int lcore_id)
case PM_ENV_AMD_PSTATE_CPUFREQ:
return power_amd_pstate_cpufreq_exit(lcore_id);
default:
- RTE_LOG(ERR, POWER, "Environment has not been set, unable to exit gracefully\n");
+ RTE_LOG_LINE(ERR, POWER, "Environment has not been set, unable to exit gracefully");
}
return -1;
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 6f18ed0adf..fb7d8fddb3 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -146,7 +146,7 @@ get_monitor_addresses(struct pmd_core_cfg *cfg,
/* attempted out of bounds access */
if (i >= len) {
- RTE_LOG(ERR, POWER, "Too many queues being monitored\n");
+ RTE_LOG_LINE(ERR, POWER, "Too many queues being monitored");
return -1;
}
@@ -423,7 +423,7 @@ check_scale(unsigned int lcore)
if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) &&
!rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) &&
!rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported");
return -ENOTSUP;
}
/* ensure we could initialize the power library */
@@ -434,7 +434,7 @@ check_scale(unsigned int lcore)
env = rte_power_get_env();
if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ &&
env != PM_ENV_AMD_PSTATE_CPUFREQ) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized");
return -ENOTSUP;
}
@@ -450,7 +450,7 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata)
/* check if rte_power_monitor is supported */
if (!global_data.intrinsics_support.power_monitor) {
- RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Monitoring intrinsics are not supported");
return -ENOTSUP;
}
/* check if multi-monitor is supported */
@@ -459,14 +459,14 @@ check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata)
/* if we're adding a new queue, do we support multiple queues? */
if (cfg->n_queues > 0 && !multimonitor_supported) {
- RTE_LOG(DEBUG, POWER, "Monitoring multiple queues is not supported\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Monitoring multiple queues is not supported");
return -ENOTSUP;
}
/* check if the device supports the necessary PMD API */
if (rte_eth_get_monitor_addr(qdata->portid, qdata->qid,
&dummy) == -ENOTSUP) {
- RTE_LOG(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr\n");
+ RTE_LOG_LINE(DEBUG, POWER, "The device does not support rte_eth_get_monitor_addr");
return -ENOTSUP;
}
@@ -566,14 +566,14 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
clb = clb_pause;
break;
default:
- RTE_LOG(DEBUG, POWER, "Invalid power management type\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Invalid power management type");
ret = -EINVAL;
goto end;
}
/* add this queue to the list */
ret = queue_list_add(lcore_cfg, &qdata);
if (ret < 0) {
- RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n",
+ RTE_LOG_LINE(DEBUG, POWER, "Failed to add queue to list: %s",
strerror(-ret));
goto end;
}
@@ -686,7 +686,7 @@ int
rte_power_pmd_mgmt_set_pause_duration(unsigned int duration)
{
if (duration == 0) {
- RTE_LOG(ERR, POWER, "Pause duration must be greater than 0, value unchanged\n");
+ RTE_LOG_LINE(ERR, POWER, "Pause duration must be greater than 0, value unchanged");
return -EINVAL;
}
pause_duration = duration;
@@ -704,12 +704,12 @@ int
rte_power_pmd_mgmt_set_scaling_freq_min(unsigned int lcore, unsigned int min)
{
if (lcore >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore);
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore);
return -EINVAL;
}
if (min > scale_freq_max[lcore]) {
- RTE_LOG(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid min frequency: Cannot be greater than max frequency");
return -EINVAL;
}
scale_freq_min[lcore] = min;
@@ -721,7 +721,7 @@ int
rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
{
if (lcore >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore);
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore);
return -EINVAL;
}
@@ -729,7 +729,7 @@ rte_power_pmd_mgmt_set_scaling_freq_max(unsigned int lcore, unsigned int max)
if (max == 0)
max = UINT32_MAX;
if (max < scale_freq_min[lcore]) {
- RTE_LOG(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency\n");
+ RTE_LOG_LINE(ERR, POWER, "Invalid max frequency: Cannot be less than min frequency");
return -EINVAL;
}
@@ -742,12 +742,12 @@ int
rte_power_pmd_mgmt_get_scaling_freq_min(unsigned int lcore)
{
if (lcore >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore);
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore);
return -EINVAL;
}
if (scale_freq_max[lcore] == 0)
- RTE_LOG(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Scaling freq min config not set. Using sysfs min freq.");
return scale_freq_min[lcore];
}
@@ -756,12 +756,12 @@ int
rte_power_pmd_mgmt_get_scaling_freq_max(unsigned int lcore)
{
if (lcore >= RTE_MAX_LCORE) {
- RTE_LOG(ERR, POWER, "Invalid lcore ID: %u\n", lcore);
+ RTE_LOG_LINE(ERR, POWER, "Invalid lcore ID: %u", lcore);
return -EINVAL;
}
if (scale_freq_max[lcore] == UINT32_MAX) {
- RTE_LOG(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.\n");
+ RTE_LOG_LINE(DEBUG, POWER, "Scaling freq max config not set. Using sysfs max freq.");
return 0;
}
diff --git a/lib/power/rte_power_uncore.c b/lib/power/rte_power_uncore.c
index 9c20fe150d..d57fc18faa 100644
--- a/lib/power/rte_power_uncore.c
+++ b/lib/power/rte_power_uncore.c
@@ -101,7 +101,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
rte_spinlock_lock(&global_env_cfg_lock);
if (default_uncore_env != RTE_UNCORE_PM_ENV_NOT_SET) {
- RTE_LOG(ERR, POWER, "Uncore Power Management Env already set.\n");
+ RTE_LOG_LINE(ERR, POWER, "Uncore Power Management Env already set.");
rte_spinlock_unlock(&global_env_cfg_lock);
return -1;
}
@@ -124,7 +124,7 @@ rte_power_set_uncore_env(enum rte_uncore_power_mgmt_env env)
rte_power_uncore_get_num_pkgs = power_intel_uncore_get_num_pkgs;
rte_power_uncore_get_num_dies = power_intel_uncore_get_num_dies;
} else {
- RTE_LOG(ERR, POWER, "Invalid Power Management Environment(%d) set\n", env);
+ RTE_LOG_LINE(ERR, POWER, "Invalid Power Management Environment(%d) set", env);
ret = -1;
goto out;
}
@@ -159,12 +159,12 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die)
case RTE_UNCORE_PM_ENV_INTEL_UNCORE:
return power_intel_uncore_init(pkg, die);
default:
- RTE_LOG(INFO, POWER, "Uncore Env isn't set yet!\n");
+ RTE_LOG_LINE(INFO, POWER, "Uncore Env isn't set yet!");
break;
}
/* Auto detect Environment */
- RTE_LOG(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...\n");
+ RTE_LOG_LINE(INFO, POWER, "Attempting to initialise Intel Uncore power mgmt...");
ret = power_intel_uncore_init(pkg, die);
if (ret == 0) {
rte_power_set_uncore_env(RTE_UNCORE_PM_ENV_INTEL_UNCORE);
@@ -172,8 +172,8 @@ rte_power_uncore_init(unsigned int pkg, unsigned int die)
}
if (default_uncore_env == RTE_UNCORE_PM_ENV_NOT_SET) {
- RTE_LOG(ERR, POWER, "Unable to set Power Management Environment "
- "for package %u Die %u\n", pkg, die);
+ RTE_LOG_LINE(ERR, POWER, "Unable to set Power Management Environment "
+ "for package %u Die %u", pkg, die);
ret = 0;
}
out:
@@ -187,7 +187,7 @@ rte_power_uncore_exit(unsigned int pkg, unsigned int die)
case RTE_UNCORE_PM_ENV_INTEL_UNCORE:
return power_intel_uncore_exit(pkg, die);
default:
- RTE_LOG(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully\n");
+ RTE_LOG_LINE(ERR, POWER, "Uncore Env has not been set, unable to exit gracefully");
break;
}
return -1;
diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c
index 5b6530788a..bd0b83be0c 100644
--- a/lib/rcu/rte_rcu_qsbr.c
+++ b/lib/rcu/rte_rcu_qsbr.c
@@ -20,7 +20,7 @@
#include "rcu_qsbr_pvt.h"
#define RCU_LOG(level, fmt, args...) \
- RTE_LOG(level, RCU, "%s(): " fmt "\n", __func__, ## args)
+ RTE_LOG_LINE(level, RCU, "%s(): " fmt, __func__, ## args)
/* Get the memory size of QSBR variable */
size_t
diff --git a/lib/reorder/rte_reorder.c b/lib/reorder/rte_reorder.c
index 640719c3ec..847e45b9f7 100644
--- a/lib/reorder/rte_reorder.c
+++ b/lib/reorder/rte_reorder.c
@@ -74,34 +74,34 @@ rte_reorder_init(struct rte_reorder_buffer *b, unsigned int bufsize,
};
if (b == NULL) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer parameter:"
- " NULL\n");
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer parameter:"
+ " NULL");
rte_errno = EINVAL;
return NULL;
}
if (!rte_is_power_of_2(size)) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer size"
- " - Not a power of 2\n");
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size"
+ " - Not a power of 2");
rte_errno = EINVAL;
return NULL;
}
if (name == NULL) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:"
- " NULL\n");
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:"
+ " NULL");
rte_errno = EINVAL;
return NULL;
}
if (bufsize < min_bufsize) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer memory size: %u, "
- "minimum required: %u\n", bufsize, min_bufsize);
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer memory size: %u, "
+ "minimum required: %u", bufsize, min_bufsize);
rte_errno = EINVAL;
return NULL;
}
rte_reorder_seqn_dynfield_offset = rte_mbuf_dynfield_register(&reorder_seqn_dynfield_desc);
if (rte_reorder_seqn_dynfield_offset < 0) {
- RTE_LOG(ERR, REORDER,
- "Failed to register mbuf field for reorder sequence number, rte_errno: %i\n",
+ RTE_LOG_LINE(ERR, REORDER,
+ "Failed to register mbuf field for reorder sequence number, rte_errno: %i",
rte_errno);
rte_errno = ENOMEM;
return NULL;
@@ -161,14 +161,14 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
/* Check user arguments. */
if (!rte_is_power_of_2(size)) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer size"
- " - Not a power of 2\n");
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer size"
+ " - Not a power of 2");
rte_errno = EINVAL;
return NULL;
}
if (name == NULL) {
- RTE_LOG(ERR, REORDER, "Invalid reorder buffer name ptr:"
- " NULL\n");
+ RTE_LOG_LINE(ERR, REORDER, "Invalid reorder buffer name ptr:"
+ " NULL");
rte_errno = EINVAL;
return NULL;
}
@@ -176,7 +176,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
/* allocate tailq entry */
te = rte_zmalloc("REORDER_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, REORDER, "Failed to allocate tailq entry\n");
+ RTE_LOG_LINE(ERR, REORDER, "Failed to allocate tailq entry");
rte_errno = ENOMEM;
return NULL;
}
@@ -184,7 +184,7 @@ rte_reorder_create(const char *name, unsigned socket_id, unsigned int size)
/* Allocate memory to store the reorder buffer structure. */
b = rte_zmalloc_socket("REORDER_BUFFER", bufsize, 0, socket_id);
if (b == NULL) {
- RTE_LOG(ERR, REORDER, "Memzone allocation failed\n");
+ RTE_LOG_LINE(ERR, REORDER, "Memzone allocation failed");
rte_errno = ENOMEM;
rte_free(te);
return NULL;
diff --git a/lib/rib/rte_rib.c b/lib/rib/rte_rib.c
index 251d0d4ef1..baee4bff5a 100644
--- a/lib/rib/rte_rib.c
+++ b/lib/rib/rte_rib.c
@@ -416,8 +416,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
NULL, NULL, NULL, NULL, socket_id, 0);
if (node_pool == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate mempool for RIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate mempool for RIB %s", name);
return NULL;
}
@@ -441,8 +441,8 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
/* allocate tailq entry */
te = rte_zmalloc("RIB_TAILQ_ENTRY", sizeof(*te), 0);
if (unlikely(te == NULL)) {
- RTE_LOG(ERR, LPM,
- "Can not allocate tailq entry for RIB %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate tailq entry for RIB %s", name);
rte_errno = ENOMEM;
goto exit;
}
@@ -451,7 +451,7 @@ rte_rib_create(const char *name, int socket_id, const struct rte_rib_conf *conf)
rib = rte_zmalloc_socket(mem_name,
sizeof(struct rte_rib), RTE_CACHE_LINE_SIZE, socket_id);
if (unlikely(rib == NULL)) {
- RTE_LOG(ERR, LPM, "RIB %s memory allocation failed\n", name);
+ RTE_LOG_LINE(ERR, LPM, "RIB %s memory allocation failed", name);
rte_errno = ENOMEM;
goto free_te;
}
diff --git a/lib/rib/rte_rib6.c b/lib/rib/rte_rib6.c
index ad3d48ab8e..ce54f51208 100644
--- a/lib/rib/rte_rib6.c
+++ b/lib/rib/rte_rib6.c
@@ -485,8 +485,8 @@ rte_rib6_create(const char *name, int socket_id,
NULL, NULL, NULL, NULL, socket_id, 0);
if (node_pool == NULL) {
- RTE_LOG(ERR, LPM,
- "Can not allocate mempool for RIB6 %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate mempool for RIB6 %s", name);
return NULL;
}
@@ -510,8 +510,8 @@ rte_rib6_create(const char *name, int socket_id,
/* allocate tailq entry */
te = rte_zmalloc("RIB6_TAILQ_ENTRY", sizeof(*te), 0);
if (unlikely(te == NULL)) {
- RTE_LOG(ERR, LPM,
- "Can not allocate tailq entry for RIB6 %s\n", name);
+ RTE_LOG_LINE(ERR, LPM,
+ "Can not allocate tailq entry for RIB6 %s", name);
rte_errno = ENOMEM;
goto exit;
}
@@ -520,7 +520,7 @@ rte_rib6_create(const char *name, int socket_id,
rib = rte_zmalloc_socket(mem_name,
sizeof(struct rte_rib6), RTE_CACHE_LINE_SIZE, socket_id);
if (unlikely(rib == NULL)) {
- RTE_LOG(ERR, LPM, "RIB6 %s memory allocation failed\n", name);
+ RTE_LOG_LINE(ERR, LPM, "RIB6 %s memory allocation failed", name);
rte_errno = ENOMEM;
goto free_te;
}
diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index 12046419f1..7fd6576c8c 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -55,15 +55,15 @@ rte_ring_get_memsize_elem(unsigned int esize, unsigned int count)
/* Check if element size is a multiple of 4B */
if (esize % 4 != 0) {
- RTE_LOG(ERR, RING, "element size is not a multiple of 4\n");
+ RTE_LOG_LINE(ERR, RING, "element size is not a multiple of 4");
return -EINVAL;
}
/* count must be a power of 2 */
if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK )) {
- RTE_LOG(ERR, RING,
- "Requested number of elements is invalid, must be power of 2, and not exceed %u\n",
+ RTE_LOG_LINE(ERR, RING,
+ "Requested number of elements is invalid, must be power of 2, and not exceed %u",
RTE_RING_SZ_MASK);
return -EINVAL;
@@ -198,8 +198,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
/* future proof flags, only allow supported values */
if (flags & ~RING_F_MASK) {
- RTE_LOG(ERR, RING,
- "Unsupported flags requested %#x\n", flags);
+ RTE_LOG_LINE(ERR, RING,
+ "Unsupported flags requested %#x", flags);
return -EINVAL;
}
@@ -219,8 +219,8 @@ rte_ring_init(struct rte_ring *r, const char *name, unsigned int count,
r->capacity = count;
} else {
if ((!POWEROF2(count)) || (count > RTE_RING_SZ_MASK)) {
- RTE_LOG(ERR, RING,
- "Requested size is invalid, must be power of 2, and not exceed the size limit %u\n",
+ RTE_LOG_LINE(ERR, RING,
+ "Requested size is invalid, must be power of 2, and not exceed the size limit %u",
RTE_RING_SZ_MASK);
return -EINVAL;
}
@@ -274,7 +274,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
te = rte_zmalloc("RING_TAILQ_ENTRY", sizeof(*te), 0);
if (te == NULL) {
- RTE_LOG(ERR, RING, "Cannot reserve memory for tailq\n");
+ RTE_LOG_LINE(ERR, RING, "Cannot reserve memory for tailq");
rte_errno = ENOMEM;
return NULL;
}
@@ -299,7 +299,7 @@ rte_ring_create_elem(const char *name, unsigned int esize, unsigned int count,
TAILQ_INSERT_TAIL(ring_list, te, next);
} else {
r = NULL;
- RTE_LOG(ERR, RING, "Cannot reserve memory\n");
+ RTE_LOG_LINE(ERR, RING, "Cannot reserve memory");
rte_free(te);
}
rte_mcfg_tailq_write_unlock();
@@ -331,8 +331,8 @@ rte_ring_free(struct rte_ring *r)
* therefore, there is no memzone to free.
*/
if (r->memzone == NULL) {
- RTE_LOG(ERR, RING,
- "Cannot free ring, not created with rte_ring_create()\n");
+ RTE_LOG_LINE(ERR, RING,
+ "Cannot free ring, not created with rte_ring_create()");
return;
}
@@ -355,7 +355,7 @@ rte_ring_free(struct rte_ring *r)
rte_mcfg_tailq_write_unlock();
if (rte_memzone_free(r->memzone) != 0)
- RTE_LOG(ERR, RING, "Cannot free memory\n");
+ RTE_LOG_LINE(ERR, RING, "Cannot free memory");
rte_free(te);
}
diff --git a/lib/sched/rte_pie.c b/lib/sched/rte_pie.c
index cce0ce762d..ac1f99e2bd 100644
--- a/lib/sched/rte_pie.c
+++ b/lib/sched/rte_pie.c
@@ -17,7 +17,7 @@ int
rte_pie_rt_data_init(struct rte_pie *pie)
{
if (pie == NULL) {
- RTE_LOG(ERR, SCHED, "%s: Invalid addr for pie\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: Invalid addr for pie", __func__);
return -EINVAL;
}
@@ -39,26 +39,26 @@ rte_pie_config_init(struct rte_pie_config *pie_cfg,
return -1;
if (qdelay_ref <= 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for qdelay_ref\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for qdelay_ref", __func__);
return -EINVAL;
}
if (dp_update_interval <= 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for dp_update_interval\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for dp_update_interval", __func__);
return -EINVAL;
}
if (max_burst <= 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for max_burst\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for max_burst", __func__);
return -EINVAL;
}
if (tailq_th <= 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tailq_th\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tailq_th", __func__);
return -EINVAL;
}
diff --git a/lib/sched/rte_sched.c b/lib/sched/rte_sched.c
index 76dd8dd738..75f2f12007 100644
--- a/lib/sched/rte_sched.c
+++ b/lib/sched/rte_sched.c
@@ -325,23 +325,23 @@ pipe_profile_check(struct rte_sched_pipe_params *params,
/* Pipe parameters */
if (params == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter params\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter params", __func__);
return -EINVAL;
}
/* TB rate: non-zero, not greater than port rate */
if (params->tb_rate == 0 ||
params->tb_rate > rate) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tb rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tb rate", __func__);
return -EINVAL;
}
/* TB size: non-zero */
if (params->tb_size == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tb size\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tb size", __func__);
return -EINVAL;
}
@@ -350,38 +350,38 @@ pipe_profile_check(struct rte_sched_pipe_params *params,
if ((qsize[i] == 0 && params->tc_rate[i] != 0) ||
(qsize[i] != 0 && (params->tc_rate[i] == 0 ||
params->tc_rate[i] > params->tb_rate))) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for qsize or tc_rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for qsize or tc_rate", __func__);
return -EINVAL;
}
}
if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0 ||
qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for be traffic class rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for be traffic class rate", __func__);
return -EINVAL;
}
/* TC period: non-zero */
if (params->tc_period == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tc period\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tc period", __func__);
return -EINVAL;
}
/* Best effort tc oversubscription weight: non-zero */
if (params->tc_ov_weight == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tc ov weight\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tc ov weight", __func__);
return -EINVAL;
}
/* Queue WRR weights: non-zero */
for (i = 0; i < RTE_SCHED_BE_QUEUES_PER_PIPE; i++) {
if (params->wrr_weights[i] == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for wrr weight\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for wrr weight", __func__);
return -EINVAL;
}
}
@@ -397,20 +397,20 @@ subport_profile_check(struct rte_sched_subport_profile_params *params,
/* Check user parameters */
if (params == NULL) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for parameter params\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for parameter params", __func__);
return -EINVAL;
}
if (params->tb_rate == 0 || params->tb_rate > rate) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for tb rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for tb rate", __func__);
return -EINVAL;
}
if (params->tb_size == 0) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for tb size\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for tb size", __func__);
return -EINVAL;
}
@@ -418,21 +418,21 @@ subport_profile_check(struct rte_sched_subport_profile_params *params,
uint64_t tc_rate = params->tc_rate[i];
if (tc_rate == 0 || (tc_rate > params->tb_rate)) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for tc rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for tc rate", __func__);
return -EINVAL;
}
}
if (params->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect tc rate(best effort)\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect tc rate(best effort)", __func__);
return -EINVAL;
}
if (params->tc_period == 0) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for tc period\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for tc period", __func__);
return -EINVAL;
}
@@ -445,29 +445,29 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
uint32_t i;
if (params == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter params\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter params", __func__);
return -EINVAL;
}
/* socket */
if (params->socket < 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for socket id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for socket id", __func__);
return -EINVAL;
}
/* rate */
if (params->rate == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for rate\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for rate", __func__);
return -EINVAL;
}
/* mtu */
if (params->mtu == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for mtu\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for mtu", __func__);
return -EINVAL;
}
@@ -475,8 +475,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
if (params->n_subports_per_port == 0 ||
params->n_subports_per_port > 1u << 16 ||
!rte_is_power_of_2(params->n_subports_per_port)) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for number of subports\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for number of subports", __func__);
return -EINVAL;
}
@@ -484,8 +484,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
params->n_subport_profiles == 0 ||
params->n_max_subport_profiles == 0 ||
params->n_subport_profiles > params->n_max_subport_profiles) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for subport profiles\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for subport profiles", __func__);
return -EINVAL;
}
@@ -496,8 +496,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
status = subport_profile_check(p, params->rate);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: subport profile check failed(%d)\n",
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: subport profile check failed(%d)",
__func__, status);
return -EINVAL;
}
@@ -506,8 +506,8 @@ rte_sched_port_check_params(struct rte_sched_port_params *params)
/* n_pipes_per_subport: non-zero, power of 2 */
if (params->n_pipes_per_subport == 0 ||
!rte_is_power_of_2(params->n_pipes_per_subport)) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for maximum pipes number\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for maximum pipes number", __func__);
return -EINVAL;
}
@@ -830,8 +830,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
/* Check user parameters */
if (params == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter params\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter params", __func__);
return -EINVAL;
}
@@ -842,14 +842,14 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
uint16_t qsize = params->qsize[i];
if (qsize != 0 && !rte_is_power_of_2(qsize)) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for qsize\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for qsize", __func__);
return -EINVAL;
}
}
if (params->qsize[RTE_SCHED_TRAFFIC_CLASS_BE] == 0) {
- RTE_LOG(ERR, SCHED, "%s: Incorrect qsize\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: Incorrect qsize", __func__);
return -EINVAL;
}
@@ -857,8 +857,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
if (params->n_pipes_per_subport_enabled == 0 ||
params->n_pipes_per_subport_enabled > n_max_pipes_per_subport ||
!rte_is_power_of_2(params->n_pipes_per_subport_enabled)) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for pipes number\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for pipes number", __func__);
return -EINVAL;
}
@@ -867,8 +867,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
params->n_pipe_profiles == 0 ||
params->n_max_pipe_profiles == 0 ||
params->n_pipe_profiles > params->n_max_pipe_profiles) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for pipe profiles\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for pipe profiles", __func__);
return -EINVAL;
}
@@ -878,8 +878,8 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
status = pipe_profile_check(p, rate, &params->qsize[0]);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Pipe profile check failed(%d)\n", __func__, status);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Pipe profile check failed(%d)", __func__, status);
return -EINVAL;
}
}
@@ -896,8 +896,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
status = rte_sched_port_check_params(port_params);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Port scheduler port params check failed (%d)\n",
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Port scheduler port params check failed (%d)",
__func__, status);
return 0;
@@ -910,8 +910,8 @@ rte_sched_port_get_memory_footprint(struct rte_sched_port_params *port_params,
port_params->n_pipes_per_subport,
port_params->rate);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Port scheduler subport params check failed (%d)\n",
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Port scheduler subport params check failed (%d)",
__func__, status);
return 0;
@@ -941,8 +941,8 @@ rte_sched_port_config(struct rte_sched_port_params *params)
status = rte_sched_port_check_params(params);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Port scheduler params check failed (%d)\n",
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Port scheduler params check failed (%d)",
__func__, status);
return NULL;
}
@@ -956,7 +956,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
port = rte_zmalloc_socket("qos_params", size0 + size1,
RTE_CACHE_LINE_SIZE, params->socket);
if (port == NULL) {
- RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__);
return NULL;
}
@@ -965,7 +965,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
port->subport_profiles = rte_zmalloc_socket("subport_profile", size2,
RTE_CACHE_LINE_SIZE, params->socket);
if (port->subport_profiles == NULL) {
- RTE_LOG(ERR, SCHED, "%s: Memory allocation fails\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: Memory allocation fails", __func__);
rte_free(port);
return NULL;
}
@@ -1107,8 +1107,8 @@ rte_sched_red_config(struct rte_sched_port *port,
params->cman_params->red_params[i][j].maxp_inv) != 0) {
rte_sched_free_memory(port, n_subports);
- RTE_LOG(NOTICE, SCHED,
- "%s: RED configuration init fails\n", __func__);
+ RTE_LOG_LINE(NOTICE, SCHED,
+ "%s: RED configuration init fails", __func__);
return -EINVAL;
}
}
@@ -1127,8 +1127,8 @@ rte_sched_pie_config(struct rte_sched_port *port,
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
if (params->cman_params->pie_params[i].tailq_th > params->qsize[i]) {
- RTE_LOG(NOTICE, SCHED,
- "%s: PIE tailq threshold incorrect\n", __func__);
+ RTE_LOG_LINE(NOTICE, SCHED,
+ "%s: PIE tailq threshold incorrect", __func__);
return -EINVAL;
}
@@ -1139,8 +1139,8 @@ rte_sched_pie_config(struct rte_sched_port *port,
params->cman_params->pie_params[i].tailq_th) != 0) {
rte_sched_free_memory(port, n_subports);
- RTE_LOG(NOTICE, SCHED,
- "%s: PIE configuration init fails\n", __func__);
+ RTE_LOG_LINE(NOTICE, SCHED,
+ "%s: PIE configuration init fails", __func__);
return -EINVAL;
}
}
@@ -1171,14 +1171,14 @@ rte_sched_subport_tc_ov_config(struct rte_sched_port *port,
struct rte_sched_subport *s;
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return -EINVAL;
}
if (subport_id >= port->n_subports_per_port) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter subport id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter subport id", __func__);
return -EINVAL;
}
@@ -1204,21 +1204,21 @@ rte_sched_subport_config(struct rte_sched_port *port,
/* Check user parameters */
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return 0;
}
if (subport_id >= port->n_subports_per_port) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for subport id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for subport id", __func__);
ret = -EINVAL;
goto out;
}
if (subport_profile_id >= port->n_max_subport_profiles) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Number of subport profile exceeds the max limit\n",
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Number of subport profile exceeds the max limit",
__func__);
ret = -EINVAL;
goto out;
@@ -1234,8 +1234,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
port->n_pipes_per_subport,
port->rate);
if (status != 0) {
- RTE_LOG(NOTICE, SCHED,
- "%s: Port scheduler params check failed (%d)\n",
+ RTE_LOG_LINE(NOTICE, SCHED,
+ "%s: Port scheduler params check failed (%d)",
__func__, status);
ret = -EINVAL;
goto out;
@@ -1250,8 +1250,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
s = rte_zmalloc_socket("subport_params", size0 + size1,
RTE_CACHE_LINE_SIZE, port->socket);
if (s == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Memory allocation fails\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Memory allocation fails", __func__);
ret = -ENOMEM;
goto out;
}
@@ -1282,8 +1282,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
s->cman_enabled = true;
status = rte_sched_cman_config(port, s, params, n_subports);
if (status) {
- RTE_LOG(NOTICE, SCHED,
- "%s: CMAN configuration fails\n", __func__);
+ RTE_LOG_LINE(NOTICE, SCHED,
+ "%s: CMAN configuration fails", __func__);
return status;
}
} else {
@@ -1330,8 +1330,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
s->bmp = rte_bitmap_init(n_subport_pipe_queues, s->bmp_array,
bmp_mem_size);
if (s->bmp == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Subport bitmap init error\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Subport bitmap init error", __func__);
ret = -EINVAL;
goto out;
}
@@ -1400,29 +1400,29 @@ rte_sched_pipe_config(struct rte_sched_port *port,
deactivate = (pipe_profile < 0);
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return -EINVAL;
}
if (subport_id >= port->n_subports_per_port) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter subport id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter subport id", __func__);
ret = -EINVAL;
goto out;
}
s = port->subports[subport_id];
if (pipe_id >= s->n_pipes_per_subport_enabled) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter pipe id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter pipe id", __func__);
ret = -EINVAL;
goto out;
}
if (!deactivate && profile >= s->n_pipe_profiles) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter pipe profile\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter pipe profile", __func__);
ret = -EINVAL;
goto out;
}
@@ -1447,8 +1447,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
s->tc_ov = s->tc_ov_rate > subport_tc_be_rate;
if (s->tc_ov != tc_be_ov) {
- RTE_LOG(DEBUG, SCHED,
- "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)\n",
+ RTE_LOG_LINE(DEBUG, SCHED,
+ "Subport %u Best-effort TC oversubscription is OFF (%.4lf >= %.4lf)",
subport_id, subport_tc_be_rate, s->tc_ov_rate);
}
@@ -1489,8 +1489,8 @@ rte_sched_pipe_config(struct rte_sched_port *port,
s->tc_ov = s->tc_ov_rate > subport_tc_be_rate;
if (s->tc_ov != tc_be_ov) {
- RTE_LOG(DEBUG, SCHED,
- "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)\n",
+ RTE_LOG_LINE(DEBUG, SCHED,
+ "Subport %u Best effort TC oversubscription is ON (%.4lf < %.4lf)",
subport_id, subport_tc_be_rate, s->tc_ov_rate);
}
p->tc_ov_period_id = s->tc_ov_period_id;
@@ -1518,15 +1518,15 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
/* Port */
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return -EINVAL;
}
/* Subport id not exceeds the max limit */
if (subport_id > port->n_subports_per_port) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for subport id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for subport id", __func__);
return -EINVAL;
}
@@ -1534,16 +1534,16 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
/* Pipe profiles exceeds the max limit */
if (s->n_pipe_profiles >= s->n_max_pipe_profiles) {
- RTE_LOG(ERR, SCHED,
- "%s: Number of pipe profiles exceeds the max limit\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Number of pipe profiles exceeds the max limit", __func__);
return -EINVAL;
}
/* Pipe params */
status = pipe_profile_check(params, port->rate, &s->qsize[0]);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Pipe profile check failed(%d)\n", __func__, status);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Pipe profile check failed(%d)", __func__, status);
return -EINVAL;
}
@@ -1553,8 +1553,8 @@ rte_sched_subport_pipe_profile_add(struct rte_sched_port *port,
/* Pipe profile should not exists */
for (i = 0; i < s->n_pipe_profiles; i++)
if (memcmp(s->pipe_profiles + i, pp, sizeof(*pp)) == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: Pipe profile exists\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Pipe profile exists", __func__);
return -EINVAL;
}
@@ -1581,20 +1581,20 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port,
/* Port */
if (port == NULL) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for parameter port", __func__);
return -EINVAL;
}
if (params == NULL) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for parameter profile\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for parameter profile", __func__);
return -EINVAL;
}
if (subport_profile_id == NULL) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Incorrect value for parameter subport_profile_id\n",
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Incorrect value for parameter subport_profile_id",
__func__);
return -EINVAL;
}
@@ -1603,16 +1603,16 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port,
/* Subport profiles exceeds the max limit */
if (port->n_subport_profiles >= port->n_max_subport_profiles) {
- RTE_LOG(ERR, SCHED, "%s: "
- "Number of subport profiles exceeds the max limit\n",
+ RTE_LOG_LINE(ERR, SCHED, "%s: "
+ "Number of subport profiles exceeds the max limit",
__func__);
return -EINVAL;
}
status = subport_profile_check(params, port->rate);
if (status != 0) {
- RTE_LOG(ERR, SCHED,
- "%s: subport profile check failed(%d)\n", __func__, status);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: subport profile check failed(%d)", __func__, status);
return -EINVAL;
}
@@ -1622,8 +1622,8 @@ rte_sched_port_subport_profile_add(struct rte_sched_port *port,
for (i = 0; i < port->n_subport_profiles; i++)
if (memcmp(port->subport_profiles + i,
dst, sizeof(*dst)) == 0) {
- RTE_LOG(ERR, SCHED,
- "%s: subport profile exists\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: subport profile exists", __func__);
return -EINVAL;
}
@@ -1695,26 +1695,26 @@ rte_sched_subport_read_stats(struct rte_sched_port *port,
/* Check user parameters */
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return -EINVAL;
}
if (subport_id >= port->n_subports_per_port) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for subport id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for subport id", __func__);
return -EINVAL;
}
if (stats == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter stats\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter stats", __func__);
return -EINVAL;
}
if (tc_ov == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for tc_ov\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for tc_ov", __func__);
return -EINVAL;
}
@@ -1743,26 +1743,26 @@ rte_sched_queue_read_stats(struct rte_sched_port *port,
/* Check user parameters */
if (port == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter port\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter port", __func__);
return -EINVAL;
}
if (queue_id >= rte_sched_port_queues_per_port(port)) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for queue id\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for queue id", __func__);
return -EINVAL;
}
if (stats == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter stats\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter stats", __func__);
return -EINVAL;
}
if (qlen == NULL) {
- RTE_LOG(ERR, SCHED,
- "%s: Incorrect value for parameter qlen\n", __func__);
+ RTE_LOG_LINE(ERR, SCHED,
+ "%s: Incorrect value for parameter qlen", __func__);
return -EINVAL;
}
subport_qmask = port->n_pipes_per_subport_log2 + 4;
diff --git a/lib/table/rte_table_acl.c b/lib/table/rte_table_acl.c
index 902cb78eac..944f5064d2 100644
--- a/lib/table/rte_table_acl.c
+++ b/lib/table/rte_table_acl.c
@@ -65,21 +65,21 @@ rte_table_acl_create(
/* Check input parameters */
if (p == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Invalid value for params\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for params", __func__);
return NULL;
}
if (p->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Invalid value for name\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for name", __func__);
return NULL;
}
if (p->n_rules == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rules\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rules",
__func__);
return NULL;
}
if ((p->n_rule_fields == 0) ||
(p->n_rule_fields > RTE_ACL_MAX_FIELDS)) {
- RTE_LOG(ERR, TABLE, "%s: Invalid value for n_rule_fields\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid value for n_rule_fields",
__func__);
return NULL;
}
@@ -98,8 +98,8 @@ rte_table_acl_create(
acl = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
socket_id);
if (acl == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for ACL table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for ACL table",
__func__, total_size);
return NULL;
}
@@ -140,7 +140,7 @@ rte_table_acl_free(void *table)
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -164,7 +164,7 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx)
/* Create low level ACL table */
ctx = rte_acl_create(&acl->acl_params);
if (ctx == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot create low level ACL table\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot create low level ACL table",
__func__);
return -1;
}
@@ -176,8 +176,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx)
status = rte_acl_add_rules(ctx, acl->acl_rule_list[i],
1);
if (status != 0) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot add rule to low level ACL table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot add rule to low level ACL table",
__func__);
rte_acl_free(ctx);
return -1;
@@ -196,8 +196,8 @@ rte_table_acl_build(struct rte_table_acl *acl, struct rte_acl_ctx **acl_ctx)
/* Build low level ACl table */
status = rte_acl_build(ctx, &acl->cfg);
if (status != 0) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot build the low level ACL table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot build the low level ACL table",
__func__);
rte_acl_free(ctx);
return -1;
@@ -226,29 +226,29 @@ rte_table_acl_entry_add(
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (key == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__);
return -EINVAL;
}
if (entry == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__);
return -EINVAL;
}
if (key_found == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL",
__func__);
return -EINVAL;
}
if (entry_ptr == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL",
__func__);
return -EINVAL;
}
if (rule->priority > RTE_ACL_MAX_PRIORITY) {
- RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__);
return -EINVAL;
}
@@ -291,7 +291,7 @@ rte_table_acl_entry_add(
/* Return if max rules */
if (free_pos_valid == 0) {
- RTE_LOG(ERR, TABLE, "%s: Max number of rules reached\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Max number of rules reached",
__func__);
return -ENOSPC;
}
@@ -342,15 +342,15 @@ rte_table_acl_entry_delete(
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (key == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__);
return -EINVAL;
}
if (key_found == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL",
__func__);
return -EINVAL;
}
@@ -424,28 +424,28 @@ rte_table_acl_entry_add_bulk(
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (keys == NULL) {
- RTE_LOG(ERR, TABLE, "%s: keys parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: keys parameter is NULL", __func__);
return -EINVAL;
}
if (entries == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entries parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: entries parameter is NULL", __func__);
return -EINVAL;
}
if (n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: 0 rules to add\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to add", __func__);
return -EINVAL;
}
if (key_found == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL",
__func__);
return -EINVAL;
}
if (entries_ptr == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entries_ptr parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: entries_ptr parameter is NULL",
__func__);
return -EINVAL;
}
@@ -455,20 +455,20 @@ rte_table_acl_entry_add_bulk(
struct rte_table_acl_rule_add_params *rule;
if (keys[i] == NULL) {
- RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL",
__func__, i);
return -EINVAL;
}
if (entries[i] == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: entries[%" PRIu32 "] parameter is NULL",
__func__, i);
return -EINVAL;
}
rule = keys[i];
if (rule->priority > RTE_ACL_MAX_PRIORITY) {
- RTE_LOG(ERR, TABLE, "%s: Priority is too high\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Priority is too high", __func__);
return -EINVAL;
}
}
@@ -604,26 +604,26 @@ rte_table_acl_entry_delete_bulk(
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (keys == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__);
return -EINVAL;
}
if (n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: 0 rules to delete\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: 0 rules to delete", __func__);
return -EINVAL;
}
if (key_found == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL",
__func__);
return -EINVAL;
}
for (i = 0; i < n_keys; i++) {
if (keys[i] == NULL) {
- RTE_LOG(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: keys[%" PRIu32 "] parameter is NULL",
__func__, i);
return -EINVAL;
}
diff --git a/lib/table/rte_table_array.c b/lib/table/rte_table_array.c
index a45b29ed6a..0b3107104d 100644
--- a/lib/table/rte_table_array.c
+++ b/lib/table/rte_table_array.c
@@ -61,8 +61,8 @@ rte_table_array_create(void *params, int socket_id, uint32_t entry_size)
total_size = total_cl_size * RTE_CACHE_LINE_SIZE;
t = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE, socket_id);
if (t == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for array table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for array table",
__func__, total_size);
return NULL;
}
@@ -83,7 +83,7 @@ rte_table_array_free(void *table)
/* Check input parameters */
if (t == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -107,24 +107,24 @@ rte_table_array_entry_add(
/* Check input parameters */
if (table == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (key == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key parameter is NULL", __func__);
return -EINVAL;
}
if (entry == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__);
return -EINVAL;
}
if (key_found == NULL) {
- RTE_LOG(ERR, TABLE, "%s: key_found parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_found parameter is NULL",
__func__);
return -EINVAL;
}
if (entry_ptr == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry_ptr parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry_ptr parameter is NULL",
__func__);
return -EINVAL;
}
diff --git a/lib/table/rte_table_hash_cuckoo.c b/lib/table/rte_table_hash_cuckoo.c
index 86c960c103..228b49a893 100644
--- a/lib/table/rte_table_hash_cuckoo.c
+++ b/lib/table/rte_table_hash_cuckoo.c
@@ -47,27 +47,27 @@ static int
check_params_create_hash_cuckoo(struct rte_table_hash_cuckoo_params *params)
{
if (params == NULL) {
- RTE_LOG(ERR, TABLE, "NULL Input Parameters.\n");
+ RTE_LOG_LINE(ERR, TABLE, "NULL Input Parameters.");
return -EINVAL;
}
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "Table name is NULL.\n");
+ RTE_LOG_LINE(ERR, TABLE, "Table name is NULL.");
return -EINVAL;
}
if (params->key_size == 0) {
- RTE_LOG(ERR, TABLE, "Invalid key_size.\n");
+ RTE_LOG_LINE(ERR, TABLE, "Invalid key_size.");
return -EINVAL;
}
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "Invalid n_keys.\n");
+ RTE_LOG_LINE(ERR, TABLE, "Invalid n_keys.");
return -EINVAL;
}
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "f_hash is NULL.\n");
+ RTE_LOG_LINE(ERR, TABLE, "f_hash is NULL.");
return -EINVAL;
}
@@ -94,8 +94,8 @@ rte_table_hash_cuckoo_create(void *params,
t = rte_zmalloc_socket(p->name, total_size, RTE_CACHE_LINE_SIZE, socket_id);
if (t == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for cuckoo hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for cuckoo hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -114,8 +114,8 @@ rte_table_hash_cuckoo_create(void *params,
if (h_table == NULL) {
h_table = rte_hash_create(&hash_cuckoo_params);
if (h_table == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: failed to create cuckoo hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: failed to create cuckoo hash table %s",
__func__, p->name);
rte_free(t);
return NULL;
@@ -131,8 +131,8 @@ rte_table_hash_cuckoo_create(void *params,
t->key_offset = p->key_offset;
t->h_table = h_table;
- RTE_LOG(INFO, TABLE,
- "%s: Cuckoo hash table %s memory footprint is %u bytes\n",
+ RTE_LOG_LINE(INFO, TABLE,
+ "%s: Cuckoo hash table %s memory footprint is %u bytes",
__func__, p->name, total_size);
return t;
}
diff --git a/lib/table/rte_table_hash_ext.c b/lib/table/rte_table_hash_ext.c
index 9f0220ded2..38ea96c654 100644
--- a/lib/table/rte_table_hash_ext.c
+++ b/lib/table/rte_table_hash_ext.c
@@ -128,33 +128,33 @@ check_params_create(struct rte_table_hash_params *params)
{
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__);
return -EINVAL;
}
/* key_size */
if ((params->key_size < sizeof(uint64_t)) ||
(!rte_is_power_of_2(params->key_size))) {
- RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__);
return -EINVAL;
}
/* n_keys */
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__);
return -EINVAL;
}
/* n_buckets */
if ((params->n_buckets == 0) ||
(!rte_is_power_of_2(params->n_buckets))) {
- RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__);
return -EINVAL;
}
/* f_hash */
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__);
return -EINVAL;
}
@@ -211,8 +211,8 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
key_sz + key_stack_sz + bkt_ext_stack_sz + data_sz;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
- " for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
+ " for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -222,13 +222,13 @@ rte_table_hash_ext_create(void *params, int socket_id, uint32_t entry_size)
RTE_CACHE_LINE_SIZE,
socket_id);
if (t == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
- " for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
+ " for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory "
- "footprint is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory "
+ "footprint is %" PRIu64 " bytes",
__func__, p->key_size, p->name, total_size);
/* Memory initialization */
diff --git a/lib/table/rte_table_hash_key16.c b/lib/table/rte_table_hash_key16.c
index 584c3f2c98..63b28f79c0 100644
--- a/lib/table/rte_table_hash_key16.c
+++ b/lib/table/rte_table_hash_key16.c
@@ -107,32 +107,32 @@ check_params_create(struct rte_table_hash_params *params)
{
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__);
return -EINVAL;
}
/* key_size */
if (params->key_size != KEY_SIZE) {
- RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__);
return -EINVAL;
}
/* n_keys */
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__);
return -EINVAL;
}
/* n_buckets */
if ((params->n_buckets == 0) ||
(!rte_is_power_of_2(params->n_buckets))) {
- RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__);
return -EINVAL;
}
/* f_hash */
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL",
__func__);
return -EINVAL;
}
@@ -181,8 +181,8 @@ rte_table_hash_create_key16_lru(void *params,
total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -192,13 +192,13 @@ rte_table_hash_create_key16_lru(void *params,
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint "
- "is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint "
+ "is %" PRIu64 " bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -236,7 +236,7 @@ rte_table_hash_free_key16_lru(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -391,8 +391,8 @@ rte_table_hash_create_key16_ext(void *params,
total_size = sizeof(struct rte_table_hash) +
(p->n_buckets + n_buckets_ext) * bucket_size + stack_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -402,13 +402,13 @@ rte_table_hash_create_key16_ext(void *params,
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint "
- "is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint "
+ "is %" PRIu64 " bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -446,7 +446,7 @@ rte_table_hash_free_key16_ext(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/table/rte_table_hash_key32.c b/lib/table/rte_table_hash_key32.c
index 22b5ca9166..6293bf518b 100644
--- a/lib/table/rte_table_hash_key32.c
+++ b/lib/table/rte_table_hash_key32.c
@@ -111,32 +111,32 @@ check_params_create(struct rte_table_hash_params *params)
{
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__);
return -EINVAL;
}
/* key_size */
if (params->key_size != KEY_SIZE) {
- RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__);
return -EINVAL;
}
/* n_keys */
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__);
return -EINVAL;
}
/* n_buckets */
if ((params->n_buckets == 0) ||
(!rte_is_power_of_2(params->n_buckets))) {
- RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__);
return -EINVAL;
}
/* f_hash */
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL",
__func__);
return -EINVAL;
}
@@ -184,8 +184,8 @@ rte_table_hash_create_key32_lru(void *params,
KEYS_PER_BUCKET * entry_size);
total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -195,14 +195,14 @@ rte_table_hash_create_key32_lru(void *params,
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE,
+ RTE_LOG_LINE(INFO, TABLE,
"%s: Hash table %s memory footprint "
- "is %" PRIu64 " bytes\n",
+ "is %" PRIu64 " bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -244,7 +244,7 @@ rte_table_hash_free_key32_lru(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -400,8 +400,8 @@ rte_table_hash_create_key32_ext(void *params,
total_size = sizeof(struct rte_table_hash) +
(p->n_buckets + n_buckets_ext) * bucket_size + stack_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -411,14 +411,14 @@ rte_table_hash_create_key32_ext(void *params,
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE,
+ RTE_LOG_LINE(INFO, TABLE,
"%s: Hash table %s memory footprint "
- "is %" PRIu64" bytes\n",
+ "is %" PRIu64" bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -460,7 +460,7 @@ rte_table_hash_free_key32_ext(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/table/rte_table_hash_key8.c b/lib/table/rte_table_hash_key8.c
index bd0ec4aac0..69e61c2ec8 100644
--- a/lib/table/rte_table_hash_key8.c
+++ b/lib/table/rte_table_hash_key8.c
@@ -101,32 +101,32 @@ check_params_create(struct rte_table_hash_params *params)
{
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__);
return -EINVAL;
}
/* key_size */
if (params->key_size != KEY_SIZE) {
- RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__);
return -EINVAL;
}
/* n_keys */
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: n_keys is zero\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_keys is zero", __func__);
return -EINVAL;
}
/* n_buckets */
if ((params->n_buckets == 0) ||
(!rte_is_power_of_2(params->n_buckets))) {
- RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__);
return -EINVAL;
}
/* f_hash */
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "%s: f_hash function pointer is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: f_hash function pointer is NULL",
__func__);
return -EINVAL;
}
@@ -173,8 +173,8 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
total_size = sizeof(struct rte_table_hash) + n_buckets * bucket_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
- " for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
+ " for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -184,14 +184,14 @@ rte_table_hash_create_key8_lru(void *params, int socket_id, uint32_t entry_size)
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
- " for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes"
+ " for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint "
- "is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint "
+ "is %" PRIu64 " bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -226,7 +226,7 @@ rte_table_hash_free_key8_lru(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -377,8 +377,8 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
(p->n_buckets + n_buckets_ext) * bucket_size + stack_size;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Cannot allocate %" PRIu64 " bytes "
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -388,14 +388,14 @@ rte_table_hash_create_key8_ext(void *params, int socket_id, uint32_t entry_size)
RTE_CACHE_LINE_SIZE,
socket_id);
if (f == NULL) {
- RTE_LOG(ERR, TABLE,
+ RTE_LOG_LINE(ERR, TABLE,
"%s: Cannot allocate %" PRIu64 " bytes "
- "for hash table %s\n",
+ "for hash table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s: Hash table %s memory footprint "
- "is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s: Hash table %s memory footprint "
+ "is %" PRIu64 " bytes",
__func__, p->name, total_size);
/* Memory initialization */
@@ -430,7 +430,7 @@ rte_table_hash_free_key8_ext(void *table)
/* Check input parameters */
if (f == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
diff --git a/lib/table/rte_table_hash_lru.c b/lib/table/rte_table_hash_lru.c
index 758ec4fe7a..190062b33f 100644
--- a/lib/table/rte_table_hash_lru.c
+++ b/lib/table/rte_table_hash_lru.c
@@ -105,33 +105,33 @@ check_params_create(struct rte_table_hash_params *params)
{
/* name */
if (params->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: name invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: name invalid value", __func__);
return -EINVAL;
}
/* key_size */
if ((params->key_size < sizeof(uint64_t)) ||
(!rte_is_power_of_2(params->key_size))) {
- RTE_LOG(ERR, TABLE, "%s: key_size invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: key_size invalid value", __func__);
return -EINVAL;
}
/* n_keys */
if (params->n_keys == 0) {
- RTE_LOG(ERR, TABLE, "%s: n_keys invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_keys invalid value", __func__);
return -EINVAL;
}
/* n_buckets */
if ((params->n_buckets == 0) ||
(!rte_is_power_of_2(params->n_buckets))) {
- RTE_LOG(ERR, TABLE, "%s: n_buckets invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: n_buckets invalid value", __func__);
return -EINVAL;
}
/* f_hash */
if (params->f_hash == NULL) {
- RTE_LOG(ERR, TABLE, "%s: f_hash invalid value\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: f_hash invalid value", __func__);
return -EINVAL;
}
@@ -187,9 +187,9 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
key_stack_sz + data_sz;
if (total_size > SIZE_MAX) {
- RTE_LOG(ERR, TABLE,
+ RTE_LOG_LINE(ERR, TABLE,
"%s: Cannot allocate %" PRIu64 " bytes for hash "
- "table %s\n",
+ "table %s",
__func__, total_size, p->name);
return NULL;
}
@@ -199,14 +199,14 @@ rte_table_hash_lru_create(void *params, int socket_id, uint32_t entry_size)
RTE_CACHE_LINE_SIZE,
socket_id);
if (t == NULL) {
- RTE_LOG(ERR, TABLE,
+ RTE_LOG_LINE(ERR, TABLE,
"%s: Cannot allocate %" PRIu64 " bytes for hash "
- "table %s\n",
+ "table %s",
__func__, total_size, p->name);
return NULL;
}
- RTE_LOG(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint"
- " is %" PRIu64 " bytes\n",
+ RTE_LOG_LINE(INFO, TABLE, "%s (%u-byte key): Hash table %s memory footprint"
+ " is %" PRIu64 " bytes",
__func__, p->key_size, p->name, total_size);
/* Memory initialization */
diff --git a/lib/table/rte_table_lpm.c b/lib/table/rte_table_lpm.c
index c2ef0d9ba0..989ab65ee6 100644
--- a/lib/table/rte_table_lpm.c
+++ b/lib/table/rte_table_lpm.c
@@ -59,29 +59,29 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
/* Check input parameters */
if (p == NULL) {
- RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: NULL input parameters", __func__);
return NULL;
}
if (p->n_rules == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__);
return NULL;
}
if (p->number_tbl8s == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid number_tbl8s\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid number_tbl8s", __func__);
return NULL;
}
if (p->entry_unique_size == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size",
__func__);
return NULL;
}
if (p->entry_unique_size > entry_size) {
- RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size",
__func__);
return NULL;
}
if (p->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL",
__func__);
return NULL;
}
@@ -93,8 +93,8 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
socket_id);
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for LPM table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for LPM table",
__func__, total_size);
return NULL;
}
@@ -107,7 +107,7 @@ rte_table_lpm_create(void *params, int socket_id, uint32_t entry_size)
if (lpm->lpm == NULL) {
rte_free(lpm);
- RTE_LOG(ERR, TABLE, "Unable to create low-level LPM table\n");
+ RTE_LOG_LINE(ERR, TABLE, "Unable to create low-level LPM table");
return NULL;
}
@@ -127,7 +127,7 @@ rte_table_lpm_free(void *table)
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -187,21 +187,21 @@ rte_table_lpm_entry_add(
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (ip_prefix == NULL) {
- RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL",
__func__);
return -EINVAL;
}
if (entry == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__);
return -EINVAL;
}
if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) {
- RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)",
__func__, ip_prefix->depth);
return -EINVAL;
}
@@ -216,7 +216,7 @@ rte_table_lpm_entry_add(
uint8_t *nht_entry;
if (nht_find_free(lpm, &nht_pos) == 0) {
- RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__);
return -1;
}
@@ -226,7 +226,7 @@ rte_table_lpm_entry_add(
/* Add rule to low level LPM table */
if (rte_lpm_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth, nht_pos) < 0) {
- RTE_LOG(ERR, TABLE, "%s: LPM rule add failed\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule add failed", __func__);
return -1;
}
@@ -253,16 +253,16 @@ rte_table_lpm_entry_delete(
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (ip_prefix == NULL) {
- RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL",
__func__);
return -EINVAL;
}
if ((ip_prefix->depth == 0) || (ip_prefix->depth > 32)) {
- RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+ RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__,
ip_prefix->depth);
return -EINVAL;
}
@@ -271,7 +271,7 @@ rte_table_lpm_entry_delete(
status = rte_lpm_is_rule_present(lpm->lpm, ip_prefix->ip,
ip_prefix->depth, &nht_pos);
if (status < 0) {
- RTE_LOG(ERR, TABLE, "%s: LPM algorithmic error\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM algorithmic error", __func__);
return -1;
}
if (status == 0) {
@@ -282,7 +282,7 @@ rte_table_lpm_entry_delete(
/* Delete rule from the low-level LPM table */
status = rte_lpm_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth);
if (status) {
- RTE_LOG(ERR, TABLE, "%s: LPM rule delete failed\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM rule delete failed", __func__);
return -1;
}
diff --git a/lib/table/rte_table_lpm_ipv6.c b/lib/table/rte_table_lpm_ipv6.c
index 6f3e11a14f..5b0e643832 100644
--- a/lib/table/rte_table_lpm_ipv6.c
+++ b/lib/table/rte_table_lpm_ipv6.c
@@ -56,29 +56,29 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
/* Check input parameters */
if (p == NULL) {
- RTE_LOG(ERR, TABLE, "%s: NULL input parameters\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: NULL input parameters", __func__);
return NULL;
}
if (p->n_rules == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__);
return NULL;
}
if (p->number_tbl8s == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid n_rules\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid n_rules", __func__);
return NULL;
}
if (p->entry_unique_size == 0) {
- RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size",
__func__);
return NULL;
}
if (p->entry_unique_size > entry_size) {
- RTE_LOG(ERR, TABLE, "%s: Invalid entry_unique_size\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Invalid entry_unique_size",
__func__);
return NULL;
}
if (p->name == NULL) {
- RTE_LOG(ERR, TABLE, "%s: Table name is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: Table name is NULL",
__func__);
return NULL;
}
@@ -90,8 +90,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
lpm = rte_zmalloc_socket("TABLE", total_size, RTE_CACHE_LINE_SIZE,
socket_id);
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for LPM IPv6 table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for LPM IPv6 table",
__func__, total_size);
return NULL;
}
@@ -103,8 +103,8 @@ rte_table_lpm_ipv6_create(void *params, int socket_id, uint32_t entry_size)
lpm->lpm = rte_lpm6_create(p->name, socket_id, &lpm6_config);
if (lpm->lpm == NULL) {
rte_free(lpm);
- RTE_LOG(ERR, TABLE,
- "Unable to create low-level LPM IPv6 table\n");
+ RTE_LOG_LINE(ERR, TABLE,
+ "Unable to create low-level LPM IPv6 table");
return NULL;
}
@@ -124,7 +124,7 @@ rte_table_lpm_ipv6_free(void *table)
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
@@ -184,21 +184,21 @@ rte_table_lpm_ipv6_entry_add(
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (ip_prefix == NULL) {
- RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL",
__func__);
return -EINVAL;
}
if (entry == NULL) {
- RTE_LOG(ERR, TABLE, "%s: entry parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: entry parameter is NULL", __func__);
return -EINVAL;
}
if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) {
- RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+ RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__,
ip_prefix->depth);
return -EINVAL;
}
@@ -213,7 +213,7 @@ rte_table_lpm_ipv6_entry_add(
uint8_t *nht_entry;
if (nht_find_free(lpm, &nht_pos) == 0) {
- RTE_LOG(ERR, TABLE, "%s: NHT full\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: NHT full", __func__);
return -1;
}
@@ -224,7 +224,7 @@ rte_table_lpm_ipv6_entry_add(
/* Add rule to low level LPM table */
if (rte_lpm6_add(lpm->lpm, ip_prefix->ip, ip_prefix->depth,
nht_pos) < 0) {
- RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule add failed\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule add failed", __func__);
return -1;
}
@@ -252,16 +252,16 @@ rte_table_lpm_ipv6_entry_delete(
/* Check input parameters */
if (lpm == NULL) {
- RTE_LOG(ERR, TABLE, "%s: table parameter is NULL\n", __func__);
+ RTE_LOG_LINE(ERR, TABLE, "%s: table parameter is NULL", __func__);
return -EINVAL;
}
if (ip_prefix == NULL) {
- RTE_LOG(ERR, TABLE, "%s: ip_prefix parameter is NULL\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: ip_prefix parameter is NULL",
__func__);
return -EINVAL;
}
if ((ip_prefix->depth == 0) || (ip_prefix->depth > 128)) {
- RTE_LOG(ERR, TABLE, "%s: invalid depth (%d)\n", __func__,
+ RTE_LOG_LINE(ERR, TABLE, "%s: invalid depth (%d)", __func__,
ip_prefix->depth);
return -EINVAL;
}
@@ -270,7 +270,7 @@ rte_table_lpm_ipv6_entry_delete(
status = rte_lpm6_is_rule_present(lpm->lpm, ip_prefix->ip,
ip_prefix->depth, &nht_pos);
if (status < 0) {
- RTE_LOG(ERR, TABLE, "%s: LPM IPv6 algorithmic error\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 algorithmic error",
__func__);
return -1;
}
@@ -282,7 +282,7 @@ rte_table_lpm_ipv6_entry_delete(
/* Delete rule from the low-level LPM table */
status = rte_lpm6_delete(lpm->lpm, ip_prefix->ip, ip_prefix->depth);
if (status) {
- RTE_LOG(ERR, TABLE, "%s: LPM IPv6 rule delete failed\n",
+ RTE_LOG_LINE(ERR, TABLE, "%s: LPM IPv6 rule delete failed",
__func__);
return -1;
}
diff --git a/lib/table/rte_table_stub.c b/lib/table/rte_table_stub.c
index cc21516995..a54b502f79 100644
--- a/lib/table/rte_table_stub.c
+++ b/lib/table/rte_table_stub.c
@@ -38,8 +38,8 @@ rte_table_stub_create(__rte_unused void *params,
stub = rte_zmalloc_socket("TABLE", size, RTE_CACHE_LINE_SIZE,
socket_id);
if (stub == NULL) {
- RTE_LOG(ERR, TABLE,
- "%s: Cannot allocate %u bytes for stub table\n",
+ RTE_LOG_LINE(ERR, TABLE,
+ "%s: Cannot allocate %u bytes for stub table",
__func__, size);
return NULL;
}
diff --git a/lib/vhost/fd_man.c b/lib/vhost/fd_man.c
index 83586c5b4f..ff91c3169a 100644
--- a/lib/vhost/fd_man.c
+++ b/lib/vhost/fd_man.c
@@ -334,8 +334,8 @@ fdset_pipe_init(struct fdset *fdset)
int ret;
if (pipe(fdset->u.pipefd) < 0) {
- RTE_LOG(ERR, VHOST_FDMAN,
- "failed to create pipe for vhost fdset\n");
+ RTE_LOG_LINE(ERR, VHOST_FDMAN,
+ "failed to create pipe for vhost fdset");
return -1;
}
@@ -343,8 +343,8 @@ fdset_pipe_init(struct fdset *fdset)
fdset_pipe_read_cb, NULL, NULL);
if (ret < 0) {
- RTE_LOG(ERR, VHOST_FDMAN,
- "failed to add pipe readfd %d into vhost server fdset\n",
+ RTE_LOG_LINE(ERR, VHOST_FDMAN,
+ "failed to add pipe readfd %d into vhost server fdset",
fdset->u.readfd);
fdset_pipe_uninit(fdset);
--
2.43.0