From: Xueming Li <xuemingl@nvidia.com>
To: David Marchand <david.marchand@redhat.com>
Cc: <xuemingl@nvidia.com>, Chengwen Feng <fengchengwen@huawei.com>,
	"dpdk stable" <stable@dpdk.org>
Subject: patch 'drivers: remove redundant newline from logs' has been queued to stable release 23.11.3
Date: Mon, 11 Nov 2024 14:26:49 +0800
Message-ID: <20241111062847.216344-4-xuemingl@nvidia.com>
In-Reply-To: <20241111062847.216344-1-xuemingl@nvidia.com>

Hi,

FYI, your patch has been queued to stable release 23.11.3

Note that it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.

Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If there were code changes during
the rebase (i.e. not only metadata diffs), please double-check that the
rebase was done correctly.
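
One way to double-check locally is to compare the two commits directly.
A sketch, assuming a clone that has both the upstream commit and the
staging branch fetched (commit IDs as referenced in this mail):

  # Fetch the staging branch, then compare the backport against the
  # upstream commit it was taken from.
  git fetch https://git.dpdk.org/dpdk-stable 23.11-staging
  git range-diff f665790a5dba^! 5b424bd34d8c^!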

Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging

This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5b424bd34d8c972d428d03bc9952528d597e2040

Thanks.

Xueming Li <xuemingl@nvidia.com>

---
From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Wed, 13 Dec 2023 20:29:58 +0100
Subject: [PATCH] drivers: remove redundant newline from logs
Cc: Xueming Li <xuemingl@nvidia.com>

[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]

Fix places where two newline characters may be logged.
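
A minimal sketch of the pattern being fixed (hypothetical macro and
message, not taken from any driver): when the logging helper already
appends a newline, as the driver log macros touched here do, a format
string that also ends in "\n" logs a spurious blank line.

  #include <stdio.h>

  /* Hypothetical helper that appends the newline itself. */
  #define PMD_LOG(fmt, ...) printf("pmd: " fmt "\n", ##__VA_ARGS__)

  int main(void)
  {
          PMD_LOG("link is down\n"); /* before: "\n\n", blank line logged */
          PMD_LOG("link is down");   /* after: exactly one newline */
          return 0;
  }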

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
 drivers/baseband/acc/rte_acc100_pmd.c         |  22 +-
 drivers/baseband/acc/rte_vrb_pmd.c            |  26 +--
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  14 +-
 drivers/baseband/la12xx/bbdev_la12xx.c        |   4 +-
 .../baseband/turbo_sw/bbdev_turbo_software.c  |   4 +-
 drivers/bus/cdx/cdx_vfio.c                    |   8 +-
 drivers/bus/dpaa/include/fman.h               |   3 +-
 drivers/bus/fslmc/fslmc_bus.c                 |   8 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  10 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpci.c      |   4 +-
 drivers/bus/ifpga/ifpga_bus.c                 |   8 +-
 drivers/bus/vdev/vdev_params.c                |   2 +-
 drivers/bus/vmbus/vmbus_common.c              |   2 +-
 drivers/common/cnxk/roc_dev.c                 |   2 +-
 drivers/common/cnxk/roc_model.c               |   2 +-
 drivers/common/cnxk/roc_nix_ops.c             |  20 +-
 drivers/common/cnxk/roc_nix_tm.c              |   2 +-
 drivers/common/cnxk/roc_nix_tm_mark.c         |   2 +-
 drivers/common/cnxk/roc_nix_tm_ops.c          |   2 +-
 drivers/common/cnxk/roc_nix_tm_utils.c        |   2 +-
 drivers/common/cnxk/roc_sso.c                 |   2 +-
 drivers/common/cnxk/roc_tim.c                 |   2 +-
 drivers/common/cpt/cpt_ucode.h                |   4 +-
 drivers/common/idpf/idpf_common_logs.h        |   5 +-
 drivers/common/octeontx/octeontx_mbox.c       |   4 +-
 drivers/common/qat/qat_pf2vf.c                |   4 +-
 drivers/common/qat/qat_qp.c                   |   2 +-
 drivers/compress/isal/isal_compress_pmd.c     |  78 +++----
 drivers/compress/octeontx/otx_zip.h           |  12 +-
 drivers/compress/octeontx/otx_zip_pmd.c       |  14 +-
 drivers/compress/zlib/zlib_pmd.c              |  26 +--
 drivers/compress/zlib/zlib_pmd_ops.c          |   4 +-
 drivers/crypto/bcmfs/bcmfs_qp.c               |   2 +-
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c          |   2 +-
 drivers/crypto/bcmfs/bcmfs_sym_session.c      |   2 +-
 drivers/crypto/caam_jr/caam_jr.c              |  32 +--
 drivers/crypto/caam_jr/caam_jr_uio.c          |   6 +-
 drivers/crypto/ccp/ccp_dev.c                  |   2 +-
 drivers/crypto/ccp/rte_ccp_pmd.c              |   2 +-
 drivers/crypto/cnxk/cnxk_se.h                 |   6 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  42 ++--
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c   |  16 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c            |  24 +-
 drivers/crypto/dpaa_sec/dpaa_sec_log.h        |   2 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c     |   6 +-
 drivers/crypto/ipsec_mb/ipsec_mb_private.c    |   4 +-
 drivers/crypto/ipsec_mb/ipsec_mb_private.h    |   2 +-
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c        |  28 +--
 drivers/crypto/ipsec_mb/pmd_snow3g.c          |   4 +-
 .../crypto/octeontx/otx_cryptodev_hw_access.h |   6 +-
 drivers/crypto/openssl/rte_openssl_pmd.c      |  42 ++--
 drivers/crypto/openssl/rte_openssl_pmd_ops.c  |  44 ++--
 drivers/crypto/qat/qat_asym.c                 |   2 +-
 drivers/crypto/qat/qat_sym_session.c          |  12 +-
 drivers/crypto/uadk/uadk_crypto_pmd.c         |   8 +-
 drivers/crypto/virtio/virtio_cryptodev.c      |   2 +-
 drivers/dma/dpaa/dpaa_qdma.c                  |  40 ++--
 drivers/dma/dpaa2/dpaa2_qdma.c                |  10 +-
 drivers/dma/hisilicon/hisi_dmadev.c           |   6 +-
 drivers/dma/idxd/idxd_common.c                |   2 +-
 drivers/dma/idxd/idxd_pci.c                   |   6 +-
 drivers/dma/ioat/ioat_dmadev.c                |  14 +-
 drivers/event/cnxk/cnxk_tim_evdev.c           |   2 +-
 drivers/event/dlb2/dlb2.c                     | 220 +++++++++---------
 drivers/event/dlb2/dlb2_xstats.c              |   6 +-
 drivers/event/dlb2/pf/dlb2_main.c             |  52 ++---
 drivers/event/dlb2/pf/dlb2_pf.c               |  20 +-
 drivers/event/dpaa2/dpaa2_eventdev.c          |  14 +-
 drivers/event/octeontx/timvf_evdev.c          |   2 +-
 drivers/event/opdl/opdl_evdev.c               |  30 +--
 drivers/event/opdl/opdl_test.c                | 116 ++++-----
 drivers/event/sw/sw_evdev.c                   |  22 +-
 drivers/event/sw/sw_evdev_xstats.c            |   4 +-
 drivers/mempool/dpaa2/dpaa2_hw_mempool.c      |   8 +-
 drivers/mempool/octeontx/octeontx_fpavf.c     |  22 +-
 .../mempool/octeontx/rte_mempool_octeontx.c   |   6 +-
 drivers/ml/cnxk/cn10k_ml_dev.c                |  32 +--
 drivers/ml/cnxk/cnxk_ml_ops.c                 |  20 +-
 drivers/net/atlantic/atl_rxtx.c               |   4 +-
 drivers/net/atlantic/hw_atl/hw_atl_utils.c    |  12 +-
 drivers/net/axgbe/axgbe_ethdev.c              |   2 +-
 drivers/net/bnx2x/bnx2x.c                     |   8 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |   4 +-
 drivers/net/bonding/rte_eth_bond_alb.c        |   2 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   4 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |   6 +-
 drivers/net/cnxk/cnxk_ethdev.c                |   4 +-
 drivers/net/cnxk/cnxk_ethdev_mcs.c            |  14 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            |   2 +-
 drivers/net/cpfl/cpfl_flow_parser.c           |   2 +-
 drivers/net/cpfl/cpfl_fxp_rule.c              |   8 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  16 +-
 drivers/net/dpaa2/dpaa2_flow.c                |  36 +--
 drivers/net/dpaa2/dpaa2_mux.c                 |   4 +-
 drivers/net/dpaa2/dpaa2_recycle.c             |   6 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |  14 +-
 drivers/net/dpaa2/dpaa2_sparser.c             |   8 +-
 drivers/net/dpaa2/dpaa2_tm.c                  |  24 +-
 drivers/net/e1000/igb_ethdev.c                |   2 +-
 drivers/net/enetc/enetc_ethdev.c              |   4 +-
 drivers/net/enetfec/enet_ethdev.c             |   4 +-
 drivers/net/enetfec/enet_uio.c                |  10 +-
 drivers/net/enic/enic_ethdev.c                |  20 +-
 drivers/net/enic/enic_flow.c                  |  20 +-
 drivers/net/enic/enic_vf_representor.c        |  16 +-
 drivers/net/failsafe/failsafe_args.c          |   2 +-
 drivers/net/failsafe/failsafe_eal.c           |   2 +-
 drivers/net/failsafe/failsafe_ether.c         |   4 +-
 drivers/net/failsafe/failsafe_intr.c          |   6 +-
 drivers/net/gve/base/gve_adminq.c             |   2 +-
 drivers/net/hinic/base/hinic_pmd_eqs.c        |   2 +-
 drivers/net/hinic/base/hinic_pmd_mbox.c       |   6 +-
 drivers/net/hinic/base/hinic_pmd_niccfg.c     |   8 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |   4 +-
 drivers/net/hns3/hns3_dump.c                  |  12 +-
 drivers/net/hns3/hns3_intr.c                  |  12 +-
 drivers/net/hns3/hns3_ptp.c                   |   2 +-
 drivers/net/hns3/hns3_regs.c                  |   4 +-
 drivers/net/i40e/i40e_ethdev.c                |  37 ++-
 drivers/net/i40e/i40e_pf.c                    |   8 +-
 drivers/net/i40e/i40e_rxtx.c                  |  24 +-
 drivers/net/iavf/iavf_ethdev.c                |  12 +-
 drivers/net/iavf/iavf_rxtx.c                  |   2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |   4 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  14 +-
 drivers/net/ice/ice_ethdev.c                  |  44 ++--
 drivers/net/ice/ice_fdir_filter.c             |   2 +-
 drivers/net/ice/ice_hash.c                    |   8 +-
 drivers/net/ice/ice_rxtx.c                    |   2 +-
 drivers/net/ipn3ke/ipn3ke_ethdev.c            |   4 +-
 drivers/net/ipn3ke/ipn3ke_flow.c              |  23 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |  20 +-
 drivers/net/ipn3ke/ipn3ke_tm.c                |  10 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   7 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  24 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |  18 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |   8 +-
 drivers/net/memif/rte_eth_memif.c             |   2 +-
 drivers/net/mlx4/mlx4.c                       |   4 +-
 drivers/net/netvsc/hn_rxtx.c                  |   4 +-
 drivers/net/ngbe/base/ngbe_hw.c               |   2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |   2 +-
 drivers/net/ngbe/ngbe_pf.c                    |  10 +-
 drivers/net/octeon_ep/cnxk_ep_tx.c            |   2 +-
 drivers/net/octeon_ep/cnxk_ep_vf.c            |  12 +-
 drivers/net/octeon_ep/otx2_ep_vf.c            |  18 +-
 drivers/net/octeon_ep/otx_ep_common.h         |   2 +-
 drivers/net/octeon_ep/otx_ep_ethdev.c         |  80 +++----
 drivers/net/octeon_ep/otx_ep_mbox.c           |  30 +--
 drivers/net/octeon_ep/otx_ep_rxtx.c           |  74 +++---
 drivers/net/octeon_ep/otx_ep_vf.c             |  20 +-
 drivers/net/octeontx/base/octeontx_pkovf.c    |   2 +-
 drivers/net/octeontx/octeontx_ethdev.c        |   4 +-
 drivers/net/pcap/pcap_ethdev.c                |   4 +-
 drivers/net/pfe/pfe_ethdev.c                  |  22 +-
 drivers/net/pfe/pfe_hif.c                     |  12 +-
 drivers/net/pfe/pfe_hif_lib.c                 |   2 +-
 drivers/net/qede/qede_rxtx.c                  |  66 +++---
 drivers/net/thunderx/nicvf_ethdev.c           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |   4 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  24 +-
 drivers/net/txgbe/txgbe_pf.c                  |  20 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |   2 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |   4 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |   2 +-
 drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c         |  14 +-
 drivers/raw/ifpga/afu_pmd_n3000.c             |   2 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  94 ++++----
 drivers/regex/cn9k/cn9k_regexdev.c            |   2 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |  10 +-
 drivers/vdpa/nfp/nfp_vdpa.c                   |   2 +-
 171 files changed, 1194 insertions(+), 1211 deletions(-)

diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 292537e24d..9d028f0f48 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
 	}

 	rte_bbdev_log_debug(
-			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u\n",
+			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u",
 			(d->pf_device) ? "PF" : "VF",
 			(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
 			(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
 			harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);

 		if ((harq_layout[harq_index].offset > 0) && harq_prun) {
-			rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
+			rte_bbdev_log_debug("HARQ IN offset unexpected for now");
 			fcw->hcin_size0 = harq_layout[harq_index].size0;
 			fcw->hcin_offset = harq_layout[harq_index].offset;
 			fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
 	uint32_t harq_index;

 	if (harq_in_length == 0) {
-		rte_bbdev_log(ERR, "Loopback of invalid null size\n");
+		rte_bbdev_log(ERR, "Loopback of invalid null size");
 		return -EINVAL;
 	}

@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
 	fcw->hcin_en = 1;
 	fcw->hcout_en = 1;

-	rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d\n",
+	rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d",
 			ddr_mem_in, harq_index,
 			harq_layout[harq_index].offset, harq_in_length,
 			harq_dma_length_in);
@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
 		fcw->hcin_size0 = harq_in_length;
 	}
 	harq_layout[harq_index].val = 0;
-	rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d\n",
+	rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d",
 			fcw->hcin_size0, fcw->hcin_offset, fcw->hcin_size1);
 	fcw->hcout_size0 = harq_in_length;
 	fcw->hcin_decomp_mode = h_comp;
@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,

 		if (i > 0)
 			same_op = cmp_ldpc_dec_op(&ops[i-1]);
-		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
 			i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
 			ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
 			ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 		return -1;

 	rsp.val = atom_desc.rsp.val;
-	rte_bbdev_log_debug("Resp. desc %p: %x num %d\n", desc, rsp.val, desc->req.numCBs);
+	rte_bbdev_log_debug("Resp. desc %p: %x num %d", desc, rsp.val, desc->req.numCBs);

 	/* Dequeue */
 	op = desc->req.op_addr;
@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
 		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
 				__ATOMIC_RELAXED);
 		rsp.val = atom_desc.rsp.val;
-		rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d\n",
+		rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d",
 				desc, rsp.val, descs_in_tb, desc->req.numCBs);

 		op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
 		return -1;

 	rsp.val = atom_desc.rsp.val;
-	rte_bbdev_log_debug("Resp. desc %p: %x\n", desc, rsp.val);
+	rte_bbdev_log_debug("Resp. desc %p: %x", desc, rsp.val);

 	/* Dequeue */
 	op = desc->req.op_addr;
@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
 		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
 				__ATOMIC_RELAXED);
 		rsp.val = atom_desc.rsp.val;
-		rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d\n",
+		rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d",
 						desc, rsp.val, cb_idx, cbs_in_tb);

 		op->status |= ((rsp.input_err) ? (1 << RTE_BBDEV_DATA_ERROR) : 0);
@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
 	}

 	if (aram_address > ACC100_WORDS_IN_ARAM_SIZE) {
-		rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+		rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
 				aram_address, ACC100_WORDS_IN_ARAM_SIZE);
 		return -EINVAL;
 	}
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 686e086a5c..88e1d03ebf 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -348,7 +348,7 @@ fetch_acc_config(struct rte_bbdev *dev)
 	}

 	rte_bbdev_log_debug(
-			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u\n",
+			"%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u",
 			(d->pf_device) ? "PF" : "VF",
 			(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
 			(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -464,7 +464,7 @@ vrb_dev_interrupt_handler(void *cb_arg)
 			}
 		} else {
 			rte_bbdev_log_debug(
-					"VRB VF Interrupt received, Info Ring data: 0x%x\n",
+					"VRB VF Interrupt received, Info Ring data: 0x%x",
 					ring_data->val);
 			switch (int_nb) {
 			case ACC_VF_INT_DMA_DL_DESC_IRQ:
@@ -698,7 +698,7 @@ vrb_intr_enable(struct rte_bbdev *dev)

 	if (d->device_variant == VRB1_VARIANT) {
 		/* On VRB1: cannot enable MSI/IR to avoid potential back-pressure corner case. */
-		rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt\n",
+		rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt",
 				dev->data->name);
 		return -ENOTSUP;
 	}
@@ -800,7 +800,7 @@ vrb_intr_enable(struct rte_bbdev *dev)
 		return 0;
 	}

-	rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts\n",
+	rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts",
 			dev->data->name);
 	return -ENOTSUP;
 }
@@ -1023,7 +1023,7 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
 			d->queue_offset(d->pf_device, q->vf_id, q->qgrp_id, q->aq_id));

 	rte_bbdev_log_debug(
-			"Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p\n",
+			"Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p",
 			dev->data->dev_id, queue_id, q->qgrp_id, q->vf_id,
 			q->aq_id, q->aq_depth, q->mmio_reg_enqueue,
 			d->mmio_base);
@@ -1076,7 +1076,7 @@ vrb_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
 			);
 	} else if (op_type == RTE_BBDEV_OP_MLDTS) {
 		struct rte_bbdev_mldts_op *op_mldts = (struct rte_bbdev_mldts_op *) op;
-		rte_bbdev_log(INFO, "  Op MLD %d RBs %d NL %d Rp %d %d %x\n",
+		rte_bbdev_log(INFO, "  Op MLD %d RBs %d NL %d Rp %d %d %x",
 				index,
 				op_mldts->mldts.num_rbs, op_mldts->mldts.num_layers,
 				op_mldts->mldts.r_rep,
@@ -2492,7 +2492,7 @@ vrb_enqueue_ldpc_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_dec_op *op,
 		hq_output = op->ldpc_dec.harq_combined_output.data;
 		hq_len = op->ldpc_dec.harq_combined_output.length;
 		if (unlikely(!mbuf_append(hq_output_head, hq_output, hq_len))) {
-			rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d\n",
+			rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d",
 					hq_output->buf_len,
 					hq_len);
 			return -1;
@@ -2985,7 +2985,7 @@ vrb_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
 			break;
 		}
 		avail -= 1;
-		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
 			i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
 			ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
 			ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
 		return -1;

 	rsp.val = atom_desc.rsp.val;
-	rte_bbdev_log_debug("Resp. desc %p: %x %x %x\n", desc, rsp.val, desc->rsp.add_info_0,
+	rte_bbdev_log_debug("Resp. desc %p: %x %x %x", desc, rsp.val, desc->rsp.add_info_0,
 			desc->rsp.add_info_1);

 	/* Dequeue. */
@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
 	}

 	if (check_bit(op->ldpc_dec.op_flags, RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK)) {
-		rte_bbdev_log_debug("TB-CRC Check %x\n", tb_crc_check);
+		rte_bbdev_log_debug("TB-CRC Check %x", tb_crc_check);
 		if (tb_crc_check > 0)
 			op->status |= 1 << RTE_BBDEV_CRC_ERROR;
 	}
@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
 	layer_idx = RTE_MIN(op->mldts.num_layers - VRB2_MLD_MIN_LAYER,
 			VRB2_MLD_MAX_LAYER - VRB2_MLD_MIN_LAYER);
 	rrep_idx = RTE_MIN(op->mldts.r_rep, VRB2_MLD_MAX_RREP);
-	rte_bbdev_log_debug("RB %d index %d %d max %d\n", op->mldts.num_rbs, layer_idx, rrep_idx,
+	rte_bbdev_log_debug("RB %d index %d %d max %d", op->mldts.num_rbs, layer_idx, rrep_idx,
 			max_rb[layer_idx][rrep_idx]);

 	return (op->mldts.num_rbs <= max_rb[layer_idx][rrep_idx]);
@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
 	}

 	if (aram_address > VRB1_WORDS_IN_ARAM_SIZE) {
-		rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+		rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
 				aram_address, VRB1_WORDS_IN_ARAM_SIZE);
 		return -EINVAL;
 	}
@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
 			}
 		}
 		if (aram_address > VRB2_WORDS_IN_ARAM_SIZE) {
-			rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+			rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
 					aram_address, VRB2_WORDS_IN_ARAM_SIZE);
 			return -EINVAL;
 		}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6b0644ffc5..d60cd3a5c5 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
 	do {
 		if (cnt > 0)
 			usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
-		rte_bbdev_log_debug("Acquiring Mutex for %x\n",
+		rte_bbdev_log_debug("Acquiring Mutex for %x",
 				q->ddr_mutex_uuid);
 		fpga_reg_write_32(q->d->mmio_base,
 				FPGA_5GNR_FEC_MUTEX,
 				mutex_ctrl);
 		mutex_read = fpga_reg_read_32(q->d->mmio_base,
 				FPGA_5GNR_FEC_MUTEX);
-		rte_bbdev_log_debug("Mutex %x cnt %d owner %x\n",
+		rte_bbdev_log_debug("Mutex %x cnt %d owner %x",
 				mutex_read, cnt, q->ddr_mutex_uuid);
 		cnt++;
 	} while ((mutex_read >> 16) != q->ddr_mutex_uuid);
@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
 			FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
 	if (reg_32 < harq_in_length) {
 		left_length = reg_32;
-		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
 	}

 	input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
 			FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
 	if (reg < harq_in_length) {
 		harq_in_length = reg;
-		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
 	}

 	if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
-		rte_bbdev_log(ERR, "HARQ output buffer warning %d %d\n",
+		rte_bbdev_log(ERR, "HARQ output buffer warning %d %d",
 				harq_output->buf_len -
 				rte_pktmbuf_headroom(harq_output),
 				harq_in_length);
 		harq_in_length = harq_output->buf_len -
 				rte_pktmbuf_headroom(harq_output);
 		if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
-			rte_bbdev_log(ERR, "HARQ output buffer issue %d %d\n",
+			rte_bbdev_log(ERR, "HARQ output buffer issue %d %d",
 					harq_output->buf_len, harq_in_length);
 			return -1;
 		}
@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
 				FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
 			if (reg == FPGA_DDR_OVERFLOW) {
 				rte_bbdev_log(ERR,
-						"Read address is overflow!\n");
+						"Read address is overflow!");
 				return -1;
 			}
 		}
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index 1a56e73abd..af4b4f1e9a 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -201,7 +201,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
 		q_priv->la12xx_core_id = LA12XX_LDPC_DEC_CORE;
 		break;
 	default:
-		rte_bbdev_log(ERR, "Unsupported op type\n");
+		rte_bbdev_log(ERR, "Unsupported op type");
 		return -1;
 	}

@@ -269,7 +269,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
 		ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_dec_queues++);
 		break;
 	default:
-		rte_bbdev_log(ERR, "Not supported op type\n");
+		rte_bbdev_log(ERR, "Not supported op type");
 		return -1;
 	}
 	ch->op_type = rte_cpu_to_be_32(q_priv->op_type);
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 8ddc7ff05f..a66dcd8962 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -269,7 +269,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
 			dev_info->num_queues[op_cap->type] = num_queue_per_type;
 	}

-	rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
+	rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
 }

 /* Release queue */
@@ -1951,7 +1951,7 @@ turbo_sw_bbdev_probe(struct rte_vdev_device *vdev)
 	parse_turbo_sw_params(&init_params, input_args);

 	rte_bbdev_log_debug(
-			"Initialising %s on NUMA node %d with max queues: %d\n",
+			"Initialising %s on NUMA node %d with max queues: %d",
 			name, init_params.socket_id, init_params.queues_num);

 	return turbo_sw_bbdev_create(vdev, &init_params);
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 79abc3f120..664f267471 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -638,7 +638,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 	feature->flags |= VFIO_DEVICE_FEATURE_SET;
 	ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
 	if (ret) {
-		CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)\n",
+		CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)",
 			dev->name, errno, strerror(errno));
 		free(feature);
 		return ret;
@@ -648,7 +648,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
 	vfio_bm_feature->op = VFIO_DEVICE_FEATURE_SET_MASTER;
 	ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
 	if (ret < 0)
-		CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)\n",
+		CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)",
 			dev->name, errno, strerror(errno));

 	free(feature);
@@ -682,7 +682,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
 	feature->flags |= VFIO_DEVICE_FEATURE_SET;
 	ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
 	if (ret) {
-		CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)\n",
+		CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)",
 			dev->name, errno, strerror(errno));
 		free(feature);
 		return ret;
@@ -692,7 +692,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
 	vfio_bm_feature->op = VFIO_DEVICE_FEATURE_CLEAR_MASTER;
 	ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
 	if (ret < 0)
-		CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)\n",
+		CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)",
 			dev->name, errno, strerror(errno));

 	free(feature);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..19f6132bba 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -403,7 +403,8 @@ extern int fman_ccsr_map_fd;
 #define FMAN_ERR(rc, fmt, args...) \
 	do { \
 		_errno = (rc); \
-		DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+		rte_log(RTE_LOG_ERR, dpaa_logtype_bus, "dpaa: " fmt "(%d)\n", \
+			##args, errno); \
 	} while (0)

 #define FMAN_IP_REV_1	0xC30C4
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 89f0f329c0..adb452fd3e 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
 	const struct rte_dpaa2_device *dstart;
 	struct rte_dpaa2_device *dev;

-	DPAA2_BUS_DEBUG("Finding a device named %s\n", (const char *)data);
+	DPAA2_BUS_DEBUG("Finding a device named %s", (const char *)data);

 	/* find_device is always called with an opaque object which should be
 	 * passed along to the 'cmp' function iterating over all device obj
@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
 	}
 	while (dev != NULL) {
 		if (cmp(&dev->device, data) == 0) {
-			DPAA2_BUS_DEBUG("Found device (%s)\n",
+			DPAA2_BUS_DEBUG("Found device (%s)",
 					dev->device.name);
 			return &dev->device;
 		}
@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,

 	/* Expectation is that device would be name=device_name */
 	if (strncmp(str, "name=", 5) != 0) {
-		DPAA2_BUS_DEBUG("Invalid device string (%s)\n", str);
+		DPAA2_BUS_DEBUG("Invalid device string (%s)", str);
 		return NULL;
 	}

 	/* Now that name=device_name format is available, split */
 	dup = strdup(str);
 	if (dup == NULL) {
-		DPAA2_BUS_DEBUG("Dup string (%s) failed!\n", str);
+		DPAA2_BUS_DEBUG("Dup string (%s) failed!", str);
 		return NULL;
 	}
 	dev_name = dup + strlen("name=");
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 5966776a85..b90efeb651 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -232,7 +232,7 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,

 		/* iova_addr may be set to RTE_BAD_IOVA */
 		if (iova_addr == RTE_BAD_IOVA) {
-			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
+			DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
 			cur_len += map_len;
 			continue;
 		}
@@ -389,7 +389,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
 	dma_map.vaddr = vaddr;
 	dma_map.iova = iova;

-	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64"\n",
+	DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
 			(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
 			(uint64_t)dma_map.size);
 	ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
@@ -480,13 +480,13 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 	ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
 	if (ret) {
 		DPAA2_BUS_ERR("  %s cannot get group status, "
-				"error %i (%s)\n", dev_addr,
+				"error %i (%s)", dev_addr,
 				errno, strerror(errno));
 		close(vfio_group_fd);
 		rte_vfio_clear_group(vfio_group_fd);
 		return -1;
 	} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
-		DPAA2_BUS_ERR("  %s VFIO group is not viable!\n", dev_addr);
+		DPAA2_BUS_ERR("  %s VFIO group is not viable!", dev_addr);
 		close(vfio_group_fd);
 		rte_vfio_clear_group(vfio_group_fd);
 		return -1;
@@ -503,7 +503,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
 				&vfio_container_fd);
 		if (ret) {
 			DPAA2_BUS_ERR("  %s cannot add VFIO group to container, "
-					"error %i (%s)\n", dev_addr,
+					"error %i (%s)", dev_addr,
 					errno, strerror(errno));
 			close(vfio_group_fd);
 			close(vfio_container_fd);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 07256ed7ec..7e858a113f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -86,7 +86,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 					sizeof(struct queue_storage_info_t),
 					RTE_CACHE_LINE_SIZE);
 		if (!rxq->q_storage) {
-			DPAA2_BUS_ERR("q_storage allocation failed\n");
+			DPAA2_BUS_ERR("q_storage allocation failed");
 			ret = -ENOMEM;
 			goto err;
 		}
@@ -94,7 +94,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
 		memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
 		ret = dpaa2_alloc_dq_storage(rxq->q_storage);
 		if (ret) {
-			DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed\n");
+			DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed");
 			goto err;
 		}
 	}
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index ffb0c61214..11b31eee4f 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -180,7 +180,7 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
 		rawdev->dev_ops->firmware_load &&
 		rawdev->dev_ops->firmware_load(rawdev,
 				&afu_pr_conf)){
-		IFPGA_BUS_ERR("firmware load error %d\n", ret);
+		IFPGA_BUS_ERR("firmware load error %d", ret);
 		goto end;
 	}
 	afu_dev->id.uuid.uuid_low  = afu_pr_conf.afu_id.uuid.uuid_low;
@@ -316,7 +316,7 @@ ifpga_probe_all_drivers(struct rte_afu_device *afu_dev)

 	/* Check if a driver is already loaded */
 	if (rte_dev_is_probed(&afu_dev->device)) {
-		IFPGA_BUS_DEBUG("Device %s is already probed\n",
+		IFPGA_BUS_DEBUG("Device %s is already probed",
 				rte_ifpga_device_name(afu_dev));
 		return -EEXIST;
 	}
@@ -353,7 +353,7 @@ ifpga_probe(void)
 		if (ret == -EEXIST)
 			continue;
 		if (ret < 0)
-			IFPGA_BUS_ERR("failed to initialize %s device\n",
+			IFPGA_BUS_ERR("failed to initialize %s device",
 				rte_ifpga_device_name(afu_dev));
 	}

@@ -408,7 +408,7 @@ ifpga_remove_driver(struct rte_afu_device *afu_dev)

 	name = rte_ifpga_device_name(afu_dev);
 	if (afu_dev->driver == NULL) {
-		IFPGA_BUS_DEBUG("no driver attach to device %s\n", name);
+		IFPGA_BUS_DEBUG("no driver attach to device %s", name);
 		return 1;
 	}

diff --git a/drivers/bus/vdev/vdev_params.c b/drivers/bus/vdev/vdev_params.c
index 51583fe949..68ae09e2e9 100644
--- a/drivers/bus/vdev/vdev_params.c
+++ b/drivers/bus/vdev/vdev_params.c
@@ -53,7 +53,7 @@ rte_vdev_dev_iterate(const void *start,
 	if (str != NULL) {
 		kvargs = rte_kvargs_parse(str, vdev_params_keys);
 		if (kvargs == NULL) {
-			VDEV_LOG(ERR, "cannot parse argument list\n");
+			VDEV_LOG(ERR, "cannot parse argument list");
 			rte_errno = EINVAL;
 			return NULL;
 		}
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index b9139c6e6c..8a965d10d9 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -108,7 +108,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr,
 	/* no initialization when marked as blocked, return without error */
 	if (dev->device.devargs != NULL &&
 		dev->device.devargs->policy == RTE_DEV_BLOCKED) {
-		VMBUS_LOG(INFO, "  Device is blocked, not initializing\n");
+		VMBUS_LOG(INFO, "  Device is blocked, not initializing");
 		return 1;
 	}

diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 14aff233d5..35eb8b7628 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
 		rc = plt_thread_create_control(&dev->sync.pfvf_msg_thread, name,
 				pf_vf_mbox_thread_main, dev);
 		if (rc != 0) {
-			plt_err("Failed to create thread for VF mbox handling\n");
+			plt_err("Failed to create thread for VF mbox handling");
 			goto thread_fail;
 		}
 	}
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c
index 6dc2afe7f0..446ab3d2bd 100644
--- a/drivers/common/cnxk/roc_model.c
+++ b/drivers/common/cnxk/roc_model.c
@@ -153,7 +153,7 @@ cn10k_part_pass_get(uint32_t *part, uint32_t *pass)

 	dir = opendir(SYSFS_PCI_DEVICES);
 	if (dir == NULL) {
-		plt_err("%s(): opendir failed: %s\n", __func__,
+		plt_err("%s(): opendir failed: %s", __func__,
 			strerror(errno));
 		return -errno;
 	}
diff --git a/drivers/common/cnxk/roc_nix_ops.c b/drivers/common/cnxk/roc_nix_ops.c
index 9e66ad1a49..efb0a41d07 100644
--- a/drivers/common/cnxk/roc_nix_ops.c
+++ b/drivers/common/cnxk/roc_nix_ops.c
@@ -220,7 +220,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;
 	}

-	plt_nix_dbg("tcpv4 lso fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv6/TCP LSO
@@ -240,7 +240,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;
 	}

-	plt_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tcpv6 lso fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv4/UDP/TUN HDR/IPv4/TCP LSO
@@ -256,7 +256,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
-	plt_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("udp tun v4v4 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv4/UDP/TUN HDR/IPv6/TCP LSO
@@ -272,7 +272,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
-	plt_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("udp tun v4v6 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv6/UDP/TUN HDR/IPv4/TCP LSO
@@ -288,7 +288,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
-	plt_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("udp tun v6v4 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv6/UDP/TUN HDR/IPv6/TCP LSO
@@ -304,7 +304,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
-	plt_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("udp tun v6v6 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv4/TUN HDR/IPv4/TCP LSO
@@ -320,7 +320,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
-	plt_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tun v4v4 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv4/TUN HDR/IPv6/TCP LSO
@@ -336,7 +336,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
-	plt_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tun v4v6 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv6/TUN HDR/IPv4/TCP LSO
@@ -352,7 +352,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
-	plt_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tun v6v4 fmt=%u", rsp->lso_format_idx);

 	/*
 	 * IPv6/TUN HDR/IPv6/TCP LSO
@@ -369,7 +369,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
 		goto exit;

 	nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
-	plt_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+	plt_nix_dbg("tun v6v6 fmt=%u", rsp->lso_format_idx);
 	rc = 0;
 exit:
 	mbox_put(mbox);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 9e5e614b3b..92401e04d0 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
 			if (rc) {
 				roc_nix_tm_dump(sq->roc_nix, NULL);
 				roc_nix_queues_ctx_dump(sq->roc_nix, NULL);
-				plt_err("Failed to drain sq %u, rc=%d\n", sq->qid, rc);
+				plt_err("Failed to drain sq %u, rc=%d", sq->qid, rc);
 				return rc;
 			}
 			/* Freed all pending SQEs for this SQ, so disable this node */
diff --git a/drivers/common/cnxk/roc_nix_tm_mark.c b/drivers/common/cnxk/roc_nix_tm_mark.c
index e9a7604e79..092d0851b9 100644
--- a/drivers/common/cnxk/roc_nix_tm_mark.c
+++ b/drivers/common/cnxk/roc_nix_tm_mark.c
@@ -266,7 +266,7 @@ nix_tm_mark_init(struct nix *nix)
 			}

 			nix->tm_markfmt[i][j] = rsp->mark_format_idx;
-			plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u\n", i,
+			plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u", i,
 				   j, nix->tm_markfmt[i][j]);
 		}
 	}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index e1cef7a670..c1b91ad92f 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
 		/* Wait for sq entries to be flushed */
 		rc = roc_nix_tm_sq_flush_spin(sq);
 		if (rc) {
-			plt_err("Failed to drain sq, rc=%d\n", rc);
+			plt_err("Failed to drain sq, rc=%d", rc);
 			goto cleanup;
 		}
 	}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 8e3da95a45..4a09cc2aae 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -583,7 +583,7 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
 		/* Configure TL4 to send to SDP channel instead of CGX/LBK */
 		if (nix->sdp_link) {
 			relchan = nix->tx_chan_base & 0xff;
-			plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u\n", relchan, schq,
+			plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u", relchan, schq,
 				   nix->tx_chan_cnt);
 			reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
 			regval[k] = BIT_ULL(12);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 748d287bad..b02c9c7f38 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
 	mbox_alloc_msg_free_rsrc_cnt(mbox);
 	rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
 	if (rc) {
-		plt_err("Failed to get free resource count\n");
+		plt_err("Failed to get free resource count");
 		rc = -EIO;
 		goto exit;
 	}
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index f8607b2852..d39af3c85e 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
 	mbox_alloc_msg_free_rsrc_cnt(mbox);
 	rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
 	if (rc) {
-		plt_err("Failed to get free resource count\n");
+		plt_err("Failed to get free resource count");
 		mbox_put(mbox);
 		return -EIO;
 	}
diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h
index b393be4cf6..2e6846312b 100644
--- a/drivers/common/cpt/cpt_ucode.h
+++ b/drivers/common/cpt/cpt_ucode.h
@@ -2589,7 +2589,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform,
 		sess->cpt_op |= CPT_OP_CIPHER_DECRYPT;
 		sess->cpt_op |= CPT_OP_AUTH_VERIFY;
 	} else {
-		CPT_LOG_DP_ERR("Unknown aead operation\n");
+		CPT_LOG_DP_ERR("Unknown aead operation");
 		return -1;
 	}
 	switch (aead_form->algo) {
@@ -2658,7 +2658,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform,
 			ctx->dec_auth = 1;
 		}
 	} else {
-		CPT_LOG_DP_ERR("Unknown cipher operation\n");
+		CPT_LOG_DP_ERR("Unknown cipher operation");
 		return -1;
 	}

diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index f6be84ceb5..105450774e 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -9,7 +9,7 @@

 extern int idpf_common_logtype;

-#define DRV_LOG_RAW(level, ...)					\
+#define DRV_LOG(level, ...)					\
 	rte_log(RTE_LOG_ ## level,				\
 		idpf_common_logtype,				\
 		RTE_FMT("%s(): "				\
@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
 			__func__,				\
 			RTE_FMT_TAIL(__VA_ARGS__,)))

-#define DRV_LOG(level, fmt, args...)		\
-	DRV_LOG_RAW(level, fmt "\n", ## args)
-
 #ifdef RTE_LIBRTE_IDPF_DEBUG_RX
 #define RX_LOG(level, ...) \
 	RTE_LOG(level, \
diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 4fd3fda721..f98942c79c 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -264,7 +264,7 @@ octeontx_start_domain(void)

 	result = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
 	if (result != 0) {
-		mbox_log_err("Could not start domain. Err=%d. FuncErr=%d\n",
+		mbox_log_err("Could not start domain. Err=%d. FuncErr=%d",
 			     result, hdr.res_code);
 		result = -EINVAL;
 	}
@@ -288,7 +288,7 @@ octeontx_check_mbox_version(struct mbox_intf_ver *app_intf_ver,
 				    sizeof(struct mbox_intf_ver),
 				    &kernel_intf_ver, sizeof(kernel_intf_ver));
 	if (result != sizeof(kernel_intf_ver)) {
-		mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d\n",
+		mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d",
 			     result, hdr.res_code);
 		result = -EINVAL;
 	}
diff --git a/drivers/common/qat/qat_pf2vf.c b/drivers/common/qat/qat_pf2vf.c
index 621f12fce2..9b25fdc6a0 100644
--- a/drivers/common/qat/qat_pf2vf.c
+++ b/drivers/common/qat/qat_pf2vf.c
@@ -36,7 +36,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
 	}

 	if ((pf2vf_msg.msg_type & type_mask) != pf2vf_msg.msg_type) {
-		QAT_LOG(ERR, "PF2VF message type 0x%X out of range\n",
+		QAT_LOG(ERR, "PF2VF message type 0x%X out of range",
 			pf2vf_msg.msg_type);
 		return -EINVAL;
 	}
@@ -65,7 +65,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
 			(++count < ADF_IOV_MSG_ACK_MAX_RETRY));

 		if (val & ADF_PFVF_INT) {
-			QAT_LOG(ERR, "ACK not received from remote\n");
+			QAT_LOG(ERR, "ACK not received from remote");
 			return -EIO;
 		}

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index f95dd33375..21a110d22e 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -267,7 +267,7 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	if (qat_qp_check_queue_alignment(queue->base_phys_addr,
 			queue_size_bytes)) {
 		QAT_LOG(ERR, "Invalid alignment on queue create "
-					" 0x%"PRIx64"\n",
+					" 0x%"PRIx64,
 					queue->base_phys_addr);
 		ret = -EFAULT;
 		goto queue_create_err;
diff --git a/drivers/compress/isal/isal_compress_pmd.c b/drivers/compress/isal/isal_compress_pmd.c
index cb23e929ed..0e783243a8 100644
--- a/drivers/compress/isal/isal_compress_pmd.c
+++ b/drivers/compress/isal/isal_compress_pmd.c
@@ -42,10 +42,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 		/* Set private xform algorithm */
 		if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) {
 			if (xform->compress.algo == RTE_COMP_ALGO_NULL) {
-				ISAL_PMD_LOG(ERR, "By-pass not supported\n");
+				ISAL_PMD_LOG(ERR, "By-pass not supported");
 				return -ENOTSUP;
 			}
-			ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+			ISAL_PMD_LOG(ERR, "Algorithm not supported");
 			return -ENOTSUP;
 		}
 		priv_xform->compress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -55,7 +55,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 			priv_xform->compress.window_size =
 					RTE_COMP_ISAL_WINDOW_SIZE;
 		else {
-			ISAL_PMD_LOG(ERR, "Window size not supported\n");
+			ISAL_PMD_LOG(ERR, "Window size not supported");
 			return -ENOTSUP;
 		}

@@ -74,7 +74,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 					RTE_COMP_HUFFMAN_DYNAMIC;
 			break;
 		default:
-			ISAL_PMD_LOG(ERR, "Huffman code not supported\n");
+			ISAL_PMD_LOG(ERR, "Huffman code not supported");
 			return -ENOTSUP;
 		}

@@ -92,10 +92,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 			break;
 		case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
 			ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
-					" supported\n");
+					" supported");
 			return -ENOTSUP;
 		default:
-			ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+			ISAL_PMD_LOG(ERR, "Checksum type not supported");
 			priv_xform->compress.chksum = IGZIP_DEFLATE;
 			break;
 		}
@@ -105,21 +105,21 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 		 */
 		if (xform->compress.level < RTE_COMP_LEVEL_PMD_DEFAULT ||
 				xform->compress.level > RTE_COMP_LEVEL_MAX) {
-			ISAL_PMD_LOG(ERR, "Compression level out of range\n");
+			ISAL_PMD_LOG(ERR, "Compression level out of range");
 			return -EINVAL;
 		}
 		/* Check for Compressdev API level 0, No compression
 		 * not supported in ISA-L
 		 */
 		else if (xform->compress.level == RTE_COMP_LEVEL_NONE) {
-			ISAL_PMD_LOG(ERR, "No Compression not supported\n");
+			ISAL_PMD_LOG(ERR, "No Compression not supported");
 			return -ENOTSUP;
 		}
 		/* If using fixed huffman code, level must be 0 */
 		else if (priv_xform->compress.deflate.huffman ==
 				RTE_COMP_HUFFMAN_FIXED) {
 			ISAL_PMD_LOG(DEBUG, "ISA-L level 0 used due to a"
-					" fixed huffman code\n");
+					" fixed huffman code");
 			priv_xform->compress.level = RTE_COMP_ISAL_LEVEL_ZERO;
 			priv_xform->level_buffer_size =
 					ISAL_DEF_LVL0_DEFAULT;
@@ -169,7 +169,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 					ISAL_PMD_LOG(DEBUG, "Requested ISA-L level"
 						" 3 or above; Level 3 optimized"
 						" for AVX512 & AVX2 only."
-						" level changed to 2.\n");
+						" level changed to 2.");
 					priv_xform->compress.level =
 						RTE_COMP_ISAL_LEVEL_TWO;
 					priv_xform->level_buffer_size =
@@ -188,10 +188,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 		/* Set private xform algorithm */
 		if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) {
 			if (xform->decompress.algo == RTE_COMP_ALGO_NULL) {
-				ISAL_PMD_LOG(ERR, "By pass not supported\n");
+				ISAL_PMD_LOG(ERR, "By pass not supported");
 				return -ENOTSUP;
 			}
-			ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+			ISAL_PMD_LOG(ERR, "Algorithm not supported");
 			return -ENOTSUP;
 		}
 		priv_xform->decompress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -210,10 +210,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 			break;
 		case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
 			ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
-					" supported\n");
+					" supported");
 			return -ENOTSUP;
 		default:
-			ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+			ISAL_PMD_LOG(ERR, "Checksum type not supported");
 			priv_xform->decompress.chksum = ISAL_DEFLATE;
 			break;
 		}
@@ -223,7 +223,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
 			priv_xform->decompress.window_size =
 					RTE_COMP_ISAL_WINDOW_SIZE;
 		else {
-			ISAL_PMD_LOG(ERR, "Window size not supported\n");
+			ISAL_PMD_LOG(ERR, "Window size not supported");
 			return -ENOTSUP;
 		}
 	}
@@ -263,7 +263,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
 			remaining_offset);

 	if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
-		ISAL_PMD_LOG(ERR, "Invalid source or destination buffer\n");
+		ISAL_PMD_LOG(ERR, "Invalid source or destination buffer");
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		return -1;
 	}
@@ -279,7 +279,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
 		remaining_data = op->src.length - qp->stream->total_in;

 		if (ret != COMP_OK) {
-			ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+			ISAL_PMD_LOG(ERR, "Compression operation failed");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return ret;
 		}
@@ -294,7 +294,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
 					RTE_MIN(remaining_data, src->data_len);
 			} else {
 				ISAL_PMD_LOG(ERR,
-				"Not enough input buffer segments\n");
+				"Not enough input buffer segments");
 				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 				return -1;
 			}
@@ -309,7 +309,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
 				qp->stream->avail_out = dst->data_len;
 			} else {
 				ISAL_PMD_LOG(ERR,
-				"Not enough output buffer segments\n");
+				"Not enough output buffer segments");
 				op->status =
 				RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 				return -1;
@@ -378,14 +378,14 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)

 		if (ret == ISAL_OUT_OVERFLOW) {
 			ISAL_PMD_LOG(ERR, "Decompression operation ran "
-				"out of space, but can be recovered.\n%d bytes "
-				"consumed\t%d bytes produced\n",
+				"out of space, but can be recovered.%d bytes "
+				"consumed\t%d bytes produced",
 				consumed_data, qp->state->total_out);
 				op->status =
 				RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
 			return ret;
 		} else if (ret < 0) {
-			ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+			ISAL_PMD_LOG(ERR, "Decompression operation failed");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return ret;
 		}
@@ -399,7 +399,7 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)
 				qp->state->avail_out = dst->data_len;
 			} else {
 				ISAL_PMD_LOG(ERR,
-				"Not enough output buffer segments\n");
+				"Not enough output buffer segments");
 				op->status =
 				RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 				return -1;
@@ -451,14 +451,14 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
 				IGZIP_HUFFTABLE_DEFAULT);

 	if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
-		ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+		ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		return -1;
 	}

 	if (op->dst.offset >= op->m_dst->pkt_len) {
 		ISAL_PMD_LOG(ERR, "Output mbuf(s) not big enough"
-				" for offset provided.\n");
+				" for offset provided.");
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		return -1;
 	}
@@ -483,7 +483,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,

 		if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
 			ISAL_PMD_LOG(ERR, "Invalid source or destination"
-					" buffers\n");
+					" buffers");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return -1;
 		}
@@ -493,7 +493,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,

 		/* Check that output buffer did not run out of space */
 		if (ret == STATELESS_OVERFLOW) {
-			ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+			ISAL_PMD_LOG(ERR, "Output buffer not big enough");
 			op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 			return ret;
 		}
@@ -501,13 +501,13 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
 		/* Check that input buffer has been fully consumed */
 		if (qp->stream->avail_in != (uint32_t)0) {
 			ISAL_PMD_LOG(ERR, "Input buffer could not be read"
-					" entirely\n");
+					" entirely");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return -1;
 		}

 		if (ret != COMP_OK) {
-			ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+			ISAL_PMD_LOG(ERR, "Compression operation failed");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return ret;
 		}
@@ -543,14 +543,14 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
 	qp->state->crc_flag = priv_xform->decompress.chksum;

 	if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
-		ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+		ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		return -1;
 	}

 	if (op->dst.offset >= op->m_dst->pkt_len) {
 		ISAL_PMD_LOG(ERR, "Output mbuf not big enough for "
-				"offset provided.\n");
+				"offset provided.");
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		return -1;
 	}
@@ -574,7 +574,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,

 		if (unlikely(!qp->state->next_in || !qp->state->next_out)) {
 			ISAL_PMD_LOG(ERR, "Invalid source or destination"
-					" buffers\n");
+					" buffers");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return -1;
 		}
@@ -583,7 +583,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
 		ret = isal_inflate_stateless(qp->state);

 		if (ret == ISAL_OUT_OVERFLOW) {
-			ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+			ISAL_PMD_LOG(ERR, "Output buffer not big enough");
 			op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
 			return ret;
 		}
@@ -591,13 +591,13 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
 		/* Check that input buffer has been fully consumed */
 		if (qp->state->avail_in != (uint32_t)0) {
 			ISAL_PMD_LOG(ERR, "Input buffer could not be read"
-					" entirely\n");
+					" entirely");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return -1;
 		}

 		if (ret != ISAL_DECOMP_OK && ret != ISAL_END_INPUT) {
-			ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+			ISAL_PMD_LOG(ERR, "Decompression operation failed");
 			op->status = RTE_COMP_OP_STATUS_ERROR;
 			return ret;
 		}
@@ -622,7 +622,7 @@ process_op(struct isal_comp_qp *qp, struct rte_comp_op *op,
 		process_isal_inflate(op, qp, priv_xform);
 		break;
 	default:
-		ISAL_PMD_LOG(ERR, "Operation Not Supported\n");
+		ISAL_PMD_LOG(ERR, "Operation Not Supported");
 		return -ENOTSUP;
 	}
 	return 0;
@@ -641,7 +641,7 @@ isal_comp_pmd_enqueue_burst(void *queue_pair, struct rte_comp_op **ops,
 	for (i = 0; i < num_enq; i++) {
 		if (unlikely(ops[i]->op_type != RTE_COMP_OP_STATELESS)) {
 			ops[i]->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
-			ISAL_PMD_LOG(ERR, "Stateful operation not Supported\n");
+			ISAL_PMD_LOG(ERR, "Stateful operation not Supported");
 			qp->qp_stats.enqueue_err_count++;
 			continue;
 		}
@@ -696,7 +696,7 @@ compdev_isal_create(const char *name, struct rte_vdev_device *vdev,
 	dev->dequeue_burst = isal_comp_pmd_dequeue_burst;
 	dev->enqueue_burst = isal_comp_pmd_enqueue_burst;

-	ISAL_PMD_LOG(INFO, "\nISA-L library version used: "ISAL_VERSION_STRING);
+	ISAL_PMD_LOG(INFO, "ISA-L library version used: "ISAL_VERSION_STRING);

 	return 0;
 }
@@ -739,7 +739,7 @@ compdev_isal_probe(struct rte_vdev_device *dev)
 	retval = rte_compressdev_pmd_parse_input_args(&init_params, args);
 	if (retval) {
 		ISAL_PMD_LOG(ERR,
-			"Failed to parse initialisation arguments[%s]\n", args);
+			"Failed to parse initialisation arguments[%s]", args);
 		return -EINVAL;
 	}

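All of the isal hunks above share one mechanism: the PMD's log wrapper already appends a line terminator, so a literal "\n" inside the format string used to produce an extra blank line in the output. A minimal sketch of that pattern, assuming a wrapper along these lines (the real ISAL_PMD_LOG lives in the isal PMD's private header and may differ in detail):

    /* Sketch: the wrapper appends "\n" itself, so callers must not. */
    #define ISAL_PMD_LOG(level, fmt, args...) \
        rte_log(RTE_LOG_ ## level, isal_logtype_driver, \
                "%s(): " fmt "\n", __func__, ## args)

    ISAL_PMD_LOG(ERR, "Output buffer not big enough");   /* one newline */
    ISAL_PMD_LOG(ERR, "Output buffer not big enough\n"); /* trailing blank line */
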
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index 7391360925..d52f937548 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -206,7 +206,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
 			break;
 		}

-		ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64"\n",
+		ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64,
 			    sgidx, sginfo[sgidx].sg_ctl.s.length, sginfo[sgidx].sg_addr.s.addr);
 		++sgidx;
 	}
@@ -219,7 +219,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
 	}
 	qp->num_sgbuf = ++sgidx;

-	ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64"\n", tot_buf_len,
+	ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64, tot_buf_len,
 		    qp->num_sgbuf);
 	return ret;
 }
@@ -246,7 +246,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
 		inst->s.inp_ptr_ctl.s.length = qp->num_sgbuf;
 		inst->s.inp_ptr_ctl.s.fw = 0;

-		ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64"\n",
+		ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64,
 			    inst->s.inp_ptr_ctl.s.length, inst->s.inp_ptr_addr.s.addr);
 		return ret;
 	}
@@ -256,7 +256,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
 	inst->s.inp_ptr_addr.s.addr = rte_pktmbuf_iova_offset(m_src, offset);
 	inst->s.inp_ptr_ctl.s.length = inlen;

-	ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d\n", inlen);
+	ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d", inlen);
 	return ret;
 }

@@ -282,7 +282,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co
 		inst->s.out_ptr_addr.s.addr = rte_mem_virt2iova(qp->s_info);
 		inst->s.out_ptr_ctl.s.length = qp->num_sgbuf;

-		ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64"\n",
+		ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64,
 			    inst->s.out_ptr_ctl.s.length, inst->s.out_ptr_addr.s.addr);
 		return ret;
 	}
@@ -296,7 +296,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co

 	inst->s.out_ptr_ctl.s.length = inst->s.totaloutputlength;

-	ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d\n", inst->s.totaloutputlength);
+	ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d", inst->s.totaloutputlength);
 	return ret;
 }

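One detail worth calling out in the otx_zip.h hunks: after the edit, several format strings end in a bare PRIx64 token with no closing quote of its own. That is well-formed C, because PRIx64 (from <inttypes.h>) expands to a string literal and adjacent string literals concatenate at compile time. A small self-contained illustration:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t iova = 0x12345678;
        /* "iova = 0x%" PRIx64 concatenates into one format string */
        printf("iova = 0x%" PRIx64 "\n", iova);
        return 0;
    }

So dropping the trailing "\n" literal leaves e.g. "iova = 0x%"PRIx64 as a complete format string, with the line terminator left to the log wrapper.
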
diff --git a/drivers/compress/octeontx/otx_zip_pmd.c b/drivers/compress/octeontx/otx_zip_pmd.c
index fd20139da6..c8f456b319 100644
--- a/drivers/compress/octeontx/otx_zip_pmd.c
+++ b/drivers/compress/octeontx/otx_zip_pmd.c
@@ -161,7 +161,7 @@ zip_set_stream_parameters(struct rte_compressdev *dev,
 			 */

 		} else {
-			ZIP_PMD_ERR("\nxform type not supported");
+			ZIP_PMD_ERR("xform type not supported");
 			ret = -1;
 			goto err;
 		}
@@ -527,7 +527,7 @@ zip_pmd_enqueue_burst(void *queue_pair,
 	}

 	qp->enqed = enqd;
-	ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d\n", nb_ops, enqd);
+	ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d", nb_ops, enqd);

 	return enqd;
 }
@@ -563,7 +563,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
 			op->status = RTE_COMP_OP_STATUS_SUCCESS;
 		} else {
 			/* FATAL error cannot do anything */
-			ZIP_PMD_ERR("operation failed with error code:%d\n",
+			ZIP_PMD_ERR("operation failed with error code:%d",
 				zresult->s.compcode);
 			if (zresult->s.compcode == ZIP_COMP_E_DSTOP)
 				op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
@@ -571,7 +571,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
 				op->status = RTE_COMP_OP_STATUS_ERROR;
 		}

-		ZIP_PMD_LOG(DEBUG, "written %d\n", zresult->s.totalbyteswritten);
+		ZIP_PMD_LOG(DEBUG, "written %d", zresult->s.totalbyteswritten);

 		/* Update op stats */
 		switch (op->status) {
@@ -582,7 +582,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
 			op->produced = zresult->s.totalbyteswritten;
 			break;
 		default:
-			ZIP_PMD_ERR("stats not updated for status:%d\n",
+			ZIP_PMD_ERR("stats not updated for status:%d",
 				    op->status);
 			break;
 		}
@@ -598,7 +598,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
 			rte_mempool_put(qp->vf->sg_mp, qp->s_info);
 	}

-	ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d\n", nb_ops, nb_dequeued);
+	ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d", nb_ops, nb_dequeued);
 	return nb_dequeued;
 }

@@ -676,7 +676,7 @@ zip_pci_remove(struct rte_pci_device *pci_dev)
 	char compressdev_name[RTE_COMPRESSDEV_NAME_MAX_LEN];

 	if (pci_dev == NULL) {
-		ZIP_PMD_ERR(" Invalid PCI Device\n");
+		ZIP_PMD_ERR(" Invalid PCI Device");
 		return -EINVAL;
 	}
 	rte_pci_device_name(&pci_dev->addr, compressdev_name,
diff --git a/drivers/compress/zlib/zlib_pmd.c b/drivers/compress/zlib/zlib_pmd.c
index 98abd41013..92e808e78c 100644
--- a/drivers/compress/zlib/zlib_pmd.c
+++ b/drivers/compress/zlib/zlib_pmd.c
@@ -29,13 +29,13 @@ process_zlib_deflate(struct rte_comp_op *op, z_stream *strm)
 		break;
 	default:
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
-		ZLIB_PMD_ERR("Invalid flush value\n");
+		ZLIB_PMD_ERR("Invalid flush value");
 		return;
 	}

 	if (unlikely(!strm)) {
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
-		ZLIB_PMD_ERR("Invalid z_stream\n");
+		ZLIB_PMD_ERR("Invalid z_stream");
 		return;
 	}
 	/* Update z_stream with the inputs provided by application */
@@ -98,7 +98,7 @@ def_end:
 		op->produced += strm->total_out;
 		break;
 	default:
-		ZLIB_PMD_ERR("stats not updated for status:%d\n",
+		ZLIB_PMD_ERR("stats not updated for status:%d",
 				op->status);
 	}

@@ -114,7 +114,7 @@ process_zlib_inflate(struct rte_comp_op *op, z_stream *strm)

 	if (unlikely(!strm)) {
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
-		ZLIB_PMD_ERR("Invalid z_stream\n");
+		ZLIB_PMD_ERR("Invalid z_stream");
 		return;
 	}
 	strm->next_in = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
@@ -184,7 +184,7 @@ inf_end:
 		op->produced += strm->total_out;
 		break;
 	default:
-		ZLIB_PMD_ERR("stats not produced for status:%d\n",
+		ZLIB_PMD_ERR("stats not produced for status:%d",
 				op->status);
 	}

@@ -203,7 +203,7 @@ process_zlib_op(struct zlib_qp *qp, struct rte_comp_op *op)
 			(op->dst.offset > rte_pktmbuf_data_len(op->m_dst))) {
 		op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 		ZLIB_PMD_ERR("Invalid source or destination buffers or "
-			     "invalid Operation requested\n");
+			     "invalid Operation requested");
 	} else {
 		private_xform = (struct zlib_priv_xform *)op->private_xform;
 		stream = &private_xform->stream;
@@ -238,7 +238,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
 			wbits = -(xform->compress.window_size);
 			break;
 		default:
-			ZLIB_PMD_ERR("Compression algorithm not supported\n");
+			ZLIB_PMD_ERR("Compression algorithm not supported");
 			return -1;
 		}
 		/** Compression Level */
@@ -260,7 +260,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
 			if (level < RTE_COMP_LEVEL_MIN ||
 					level > RTE_COMP_LEVEL_MAX) {
 				ZLIB_PMD_ERR("Compression level %d "
-						"not supported\n",
+						"not supported",
 						level);
 				return -1;
 			}
@@ -278,13 +278,13 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
 			strategy = Z_DEFAULT_STRATEGY;
 			break;
 		default:
-			ZLIB_PMD_ERR("Compression strategy not supported\n");
+			ZLIB_PMD_ERR("Compression strategy not supported");
 			return -1;
 		}
 		if (deflateInit2(strm, level,
 					Z_DEFLATED, wbits,
 					DEF_MEM_LEVEL, strategy) != Z_OK) {
-			ZLIB_PMD_ERR("Deflate init failed\n");
+			ZLIB_PMD_ERR("Deflate init failed");
 			return -1;
 		}
 		break;
@@ -298,12 +298,12 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
 			wbits = -(xform->decompress.window_size);
 			break;
 		default:
-			ZLIB_PMD_ERR("Compression algorithm not supported\n");
+			ZLIB_PMD_ERR("Compression algorithm not supported");
 			return -1;
 		}

 		if (inflateInit2(strm, wbits) != Z_OK) {
-			ZLIB_PMD_ERR("Inflate init failed\n");
+			ZLIB_PMD_ERR("Inflate init failed");
 			return -1;
 		}
 		break;
@@ -395,7 +395,7 @@ zlib_probe(struct rte_vdev_device *vdev)
 	retval = rte_compressdev_pmd_parse_input_args(&init_params, input_args);
 	if (retval < 0) {
 		ZLIB_PMD_LOG(ERR,
-			"Failed to parse initialisation arguments[%s]\n",
+			"Failed to parse initialisation arguments[%s]",
 			input_args);
 		return -EINVAL;
 	}
diff --git a/drivers/compress/zlib/zlib_pmd_ops.c b/drivers/compress/zlib/zlib_pmd_ops.c
index 445a3baa67..a530d15119 100644
--- a/drivers/compress/zlib/zlib_pmd_ops.c
+++ b/drivers/compress/zlib/zlib_pmd_ops.c
@@ -48,8 +48,8 @@ zlib_pmd_config(struct rte_compressdev *dev,
 				NULL, config->socket_id,
 				0);
 		if (mp == NULL) {
-			ZLIB_PMD_ERR("Cannot create private xform pool on "
-			"socket %d\n", config->socket_id);
+			ZLIB_PMD_ERR("Cannot create private xform pool on socket %d",
+				config->socket_id);
 			return -ENOMEM;
 		}
 		internals->mp = mp;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index d1ede5e990..59e39a6c14 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -142,7 +142,7 @@ bcmfs_queue_create(struct bcmfs_queue *queue,

 	if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
 		BCMFS_LOG(ERR, "Invalid alignment on queue create "
-					" 0x%" PRIx64 "\n",
+					" 0x%" PRIx64,
 					queue->base_phys_addr);
 		ret = -EFAULT;
 		goto queue_create_err;
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 78272d616c..d3b1e25d57 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -217,7 +217,7 @@ bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
 	bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;

 	cdev->data->queue_pairs[qp_id] = qp;
-	BCMFS_LOG(NOTICE, "queue %d setup done\n", qp_id);
+	BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);

 	return 0;
 }
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
index 40813d1fe5..64bd4a317a 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -192,7 +192,7 @@ crypto_set_session_parameters(struct bcmfs_sym_session *sess,
 			rc = -EINVAL;
 		break;
 	default:
-		BCMFS_DP_LOG(ERR, "Invalid chain order\n");
+		BCMFS_DP_LOG(ERR, "Invalid chain order");
 		rc = -EINVAL;
 		break;
 	}
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index b55258689b..1713600db7 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -309,7 +309,7 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)

 	cdb = caam_jr_dma_mem_alloc(L1_CACHE_BYTES, sizeof(struct sec_cdb));
 	if (!cdb) {
-		CAAM_JR_ERR("failed to allocate memory for cdb\n");
+		CAAM_JR_ERR("failed to allocate memory for cdb");
 		return -1;
 	}

@@ -606,7 +606,7 @@ hw_poll_job_ring(struct sec_job_ring_t *job_ring,
 		/*TODO for multiple ops, packets*/
 		ctx = container_of(current_desc, struct caam_jr_op_ctx, jobdes);
 		if (unlikely(sec_error_code)) {
-			CAAM_JR_ERR("desc at cidx %d generated error 0x%x\n",
+			CAAM_JR_ERR("desc at cidx %d generated error 0x%x",
 				job_ring->cidx, sec_error_code);
 			hw_handle_job_ring_error(job_ring, sec_error_code);
 			//todo improve with exact errors
@@ -1368,7 +1368,7 @@ caam_jr_enqueue_op(struct rte_crypto_op *op, struct caam_jr_qp *qp)
 	}

 	if (unlikely(!ses->qp || ses->qp != qp)) {
-		CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p\n", ses->qp, qp);
+		CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p", ses->qp, qp);
 		ses->qp = qp;
 		caam_jr_prep_cdb(ses);
 	}
@@ -1554,7 +1554,7 @@ caam_jr_cipher_init(struct rte_cryptodev *dev __rte_unused,
 	session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
 					       RTE_CACHE_LINE_SIZE);
 	if (session->cipher_key.data == NULL && xform->cipher.key.length > 0) {
-		CAAM_JR_ERR("No Memory for cipher key\n");
+		CAAM_JR_ERR("No Memory for cipher key");
 		return -ENOMEM;
 	}
 	session->cipher_key.length = xform->cipher.key.length;
@@ -1576,7 +1576,7 @@ caam_jr_auth_init(struct rte_cryptodev *dev __rte_unused,
 	session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
 					     RTE_CACHE_LINE_SIZE);
 	if (session->auth_key.data == NULL && xform->auth.key.length > 0) {
-		CAAM_JR_ERR("No Memory for auth key\n");
+		CAAM_JR_ERR("No Memory for auth key");
 		return -ENOMEM;
 	}
 	session->auth_key.length = xform->auth.key.length;
@@ -1602,7 +1602,7 @@ caam_jr_aead_init(struct rte_cryptodev *dev __rte_unused,
 	session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
 					     RTE_CACHE_LINE_SIZE);
 	if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
-		CAAM_JR_ERR("No Memory for aead key\n");
+		CAAM_JR_ERR("No Memory for aead key");
 		return -ENOMEM;
 	}
 	session->aead_key.length = xform->aead.key.length;
@@ -1755,7 +1755,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 					       RTE_CACHE_LINE_SIZE);
 	if (session->cipher_key.data == NULL &&
 			cipher_xform->key.length > 0) {
-		CAAM_JR_ERR("No Memory for cipher key\n");
+		CAAM_JR_ERR("No Memory for cipher key");
 		return -ENOMEM;
 	}

@@ -1765,7 +1765,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 					RTE_CACHE_LINE_SIZE);
 	if (session->auth_key.data == NULL &&
 			auth_xform->key.length > 0) {
-		CAAM_JR_ERR("No Memory for auth key\n");
+		CAAM_JR_ERR("No Memory for auth key");
 		rte_free(session->cipher_key.data);
 		return -ENOMEM;
 	}
@@ -1810,11 +1810,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 	case RTE_CRYPTO_AUTH_KASUMI_F9:
 	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
 	case RTE_CRYPTO_AUTH_ZUC_EIA3:
-		CAAM_JR_ERR("Crypto: Unsupported auth alg %u\n",
+		CAAM_JR_ERR("Crypto: Unsupported auth alg %u",
 			auth_xform->algo);
 		goto out;
 	default:
-		CAAM_JR_ERR("Crypto: Undefined Auth specified %u\n",
+		CAAM_JR_ERR("Crypto: Undefined Auth specified %u",
 			auth_xform->algo);
 		goto out;
 	}
@@ -1834,11 +1834,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
 	case RTE_CRYPTO_CIPHER_3DES_ECB:
 	case RTE_CRYPTO_CIPHER_AES_ECB:
 	case RTE_CRYPTO_CIPHER_KASUMI_F8:
-		CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u\n",
+		CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u",
 			cipher_xform->algo);
 		goto out;
 	default:
-		CAAM_JR_ERR("Crypto: Undefined Cipher specified %u\n",
+		CAAM_JR_ERR("Crypto: Undefined Cipher specified %u",
 			cipher_xform->algo);
 		goto out;
 	}
@@ -1962,7 +1962,7 @@ caam_jr_dev_configure(struct rte_cryptodev *dev,
 						NULL, NULL, NULL, NULL,
 						SOCKET_ID_ANY, 0);
 		if (!internals->ctx_pool) {
-			CAAM_JR_ERR("%s create failed\n", str);
+			CAAM_JR_ERR("%s create failed", str);
 			return -ENOMEM;
 		}
 	} else
@@ -2180,7 +2180,7 @@ init_job_ring(void *reg_base_addr, int irq_id)
 		}
 	}
 	if (job_ring == NULL) {
-		CAAM_JR_ERR("No free job ring\n");
+		CAAM_JR_ERR("No free job ring");
 		return NULL;
 	}

@@ -2301,7 +2301,7 @@ caam_jr_dev_init(const char *name,
 						job_ring->uio_fd);

 	if (!dev->data->dev_private) {
-		CAAM_JR_ERR("Ring memory allocation failed\n");
+		CAAM_JR_ERR("Ring memory allocation failed");
 		goto cleanup2;
 	}

@@ -2334,7 +2334,7 @@ caam_jr_dev_init(const char *name,
 	security_instance = rte_malloc("caam_jr",
 				sizeof(struct rte_security_ctx), 0);
 	if (security_instance == NULL) {
-		CAAM_JR_ERR("memory allocation failed\n");
+		CAAM_JR_ERR("memory allocation failed");
 		//todo error handling.
 		goto cleanup2;
 	}
diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c
index 583ba3b523..acb40bdf77 100644
--- a/drivers/crypto/caam_jr/caam_jr_uio.c
+++ b/drivers/crypto/caam_jr/caam_jr_uio.c
@@ -338,7 +338,7 @@ free_job_ring(int uio_fd)
 	}

 	if (job_ring == NULL) {
-		CAAM_JR_ERR("JR not available for fd = %x\n", uio_fd);
+		CAAM_JR_ERR("JR not available for fd = %x", uio_fd);
 		return;
 	}

@@ -378,7 +378,7 @@ uio_job_ring *config_job_ring(void)
 	}

 	if (job_ring == NULL) {
-		CAAM_JR_ERR("No free job ring\n");
+		CAAM_JR_ERR("No free job ring");
 		return NULL;
 	}

@@ -441,7 +441,7 @@ sec_configure(void)
 					dir->d_name, "name", uio_name);
 			CAAM_JR_INFO("sec device uio name: %s", uio_name);
 			if (ret != 0) {
-				CAAM_JR_ERR("file_read_first_line failed\n");
+				CAAM_JR_ERR("file_read_first_line failed");
 				closedir(d);
 				return -1;
 			}
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index b7ca3af5a4..6d42b92d8b 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -362,7 +362,7 @@ ccp_find_lsb_regions(struct ccp_queue *cmd_q, uint64_t status)
 		if (ccp_get_bit(&cmd_q->lsbmask, j))
 			weight++;

-	CCP_LOG_DBG("Queue %d can access %d LSB regions  of mask  %lu\n",
+	CCP_LOG_DBG("Queue %d can access %d LSB regions  of mask  %lu",
 	       (int)cmd_q->id, weight, cmd_q->lsbmask);

 	return weight ? 0 : -EINVAL;
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a5271d7227..c92fdb446d 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
 	}
 	cryptodev_cnt++;

-	CCP_LOG_DBG("CCP : Crypto device count = %d\n", cryptodev_cnt);
+	CCP_LOG_DBG("CCP : Crypto device count = %d", cryptodev_cnt);
 	dev->device = &pci_dev->device;
 	dev->device->driver = &pci_drv->driver;
 	dev->driver_id = ccp_cryptodev_driver_id;
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..cf163e0208 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
 		sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
 		sess->cpt_op |= ROC_SE_OP_AUTH_VERIFY;
 	} else {
-		plt_dp_err("Unknown aead operation\n");
+		plt_dp_err("Unknown aead operation");
 		return -1;
 	}
 	switch (aead_form->algo) {
@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
 		sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
 		sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_DECRYPT;
 	} else {
-		plt_dp_err("Unknown cipher operation\n");
+		plt_dp_err("Unknown cipher operation");
 		return -1;
 	}

@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
 				ROC_SE_FC_MINOR_OP_HMAC_FIRST;
 		}
 	} else {
-		plt_dp_err("Unknown cipher operation\n");
+		plt_dp_err("Unknown cipher operation");
 		return -1;
 	}

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6ae356ace0..b65bea3b3f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1146,7 +1146,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,

 	DPAA2_SEC_DP_DEBUG(
 		"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
-		" data_off: 0x%x\n",
+		" data_off: 0x%x",
 		data_offset,
 		data_len,
 		sess->iv.length,
@@ -1172,7 +1172,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	DPAA2_SET_FLE_FIN(sge);

 	DPAA2_SEC_DP_DEBUG(
-		"CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+		"CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
 		flc, fle, fle->addr_hi, fle->addr_lo,
 		fle->length);

@@ -1212,7 +1212,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,

 	DPAA2_SEC_DP_DEBUG(
 		"CIPHER SG: fdaddr =%" PRIx64 " bpid =%d meta =%d"
-		" off =%d, len =%d\n",
+		" off =%d, len =%d",
 		DPAA2_GET_FD_ADDR(fd),
 		DPAA2_GET_FD_BPID(fd),
 		rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1292,7 +1292,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,

 	DPAA2_SEC_DP_DEBUG(
 		"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
-		" data_off: 0x%x\n",
+		" data_off: 0x%x",
 		data_offset,
 		data_len,
 		sess->iv.length,
@@ -1303,7 +1303,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 	fle->length = data_len + sess->iv.length;

 	DPAA2_SEC_DP_DEBUG(
-		"CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
+		"CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
 		flc, fle, fle->addr_hi, fle->addr_lo,
 		fle->length);

@@ -1326,7 +1326,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,

 	DPAA2_SEC_DP_DEBUG(
 		"CIPHER: fdaddr =%" PRIx64 " bpid =%d meta =%d"
-		" off =%d, len =%d\n",
+		" off =%d, len =%d",
 		DPAA2_GET_FD_ADDR(fd),
 		DPAA2_GET_FD_BPID(fd),
 		rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1348,12 +1348,12 @@ build_sec_fd(struct rte_crypto_op *op,
 	} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
 		sess = SECURITY_GET_SESS_PRIV(op->sym->session);
 	} else {
-		DPAA2_SEC_DP_ERR("Session type invalid\n");
+		DPAA2_SEC_DP_ERR("Session type invalid");
 		return -ENOTSUP;
 	}

 	if (!sess) {
-		DPAA2_SEC_DP_ERR("Session not available\n");
+		DPAA2_SEC_DP_ERR("Session not available");
 		return -EINVAL;
 	}

@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_SEC_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 			bpid = mempool_to_bpid(mb_pool);
 			ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
 			if (ret) {
-				DPAA2_SEC_DP_DEBUG("FD build failed\n");
+				DPAA2_SEC_DP_DEBUG("FD build failed");
 				goto skip_tx;
 			}
 			ops++;
@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 				if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
 					num_tx += loop;
 					nb_ops -= loop;
-					DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+					DPAA2_SEC_DP_DEBUG("Enqueue fail");
 					/* freeing the fle buffers */
 					while (loop < frames_to_send) {
 						free_fle(&fd_arr[loop],
@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)

 	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));

-	DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+	DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
 			   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);

 	/* we are using the first FLE entry to store Mbuf.
@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
 	}

 	DPAA2_SEC_DP_DEBUG("mbuf %p BMAN buf addr %p,"
-		" fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d\n",
+		" fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d",
 		(void *)dst,
 		dst->buf_addr,
 		DPAA2_GET_FD_ADDR(fd),
@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
 			bpid = mempool_to_bpid(mb_pool);
 			ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
 			if (ret) {
-				DPAA2_SEC_DP_DEBUG("FD build failed\n");
+				DPAA2_SEC_DP_DEBUG("FD build failed");
 				goto skip_tx;
 			}
 			ops++;
@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
 				if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
 					num_tx += loop;
 					nb_ops -= loop;
-					DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+					DPAA2_SEC_DP_DEBUG("Enqueue fail");
 					/* freeing the fle buffers */
 					while (loop < frames_to_send) {
 						free_fle(&fd_arr[loop],
@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_SEC_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
 			if (unlikely(
 				(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
-				DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+				DPAA2_SEC_DP_DEBUG("No frame is delivered");
 				continue;
 			}
 		}
@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 		if (unlikely(fd->simple.frc)) {
 			/* TODO Parse SEC errors */
 			if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_NO_DUMP) {
-				DPAA2_SEC_DP_ERR("SEC returned Error - %x\n",
+				DPAA2_SEC_DP_ERR("SEC returned Error - %x",
 						 fd->simple.frc);
 				if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_ERR_DUMP)
 					dpaa2_sec_dump(ops[num_rx]);
@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,

 	dpaa2_qp->rx_vq.rx_pkts += num_rx;

-	DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64 "\n", num_rx,
+	DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64, num_rx,
 				dpaa2_qp->rx_vq.err_pkts);
 	/*Return the total number of packets received to DPAA2 app*/
 	return num_rx;
@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
 #ifdef CAAM_DESC_DEBUG
 	int i;
 	for (i = 0; i < bufsize; i++)
-		DPAA2_SEC_DEBUG("DESC[%d]:0x%x\n",
+		DPAA2_SEC_DEBUG("DESC[%d]:0x%x",
 			    i, priv->flc_desc[0].desc[i]);
 #endif
 	return ret;
@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
 		if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_FULL_DUMP) {
 			DPAA2_SEC_WARN("WARN: DPAA2_SEC_DP_DUMP_LEVEL is not "
 				      "supported, changing to FULL error"
-				      " prints\n");
+				      " prints");
 			dpaa2_sec_dp_dump = DPAA2_SEC_DP_FULL_DUMP;
 		}
 	} else
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 4754b9d6f8..883584a6e2 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -605,7 +605,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	flc = &priv->flc_desc[0].flc;

 	DPAA2_SEC_DP_DEBUG(
-		"RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d\n",
+		"RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d",
 		data_offset,
 		data_len,
 		sess->iv.length);
@@ -642,7 +642,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	DPAA2_SET_FLE_FIN(sge);

 	DPAA2_SEC_DP_DEBUG(
-		"RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+		"RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
 		flc, fle, fle->addr_hi, fle->addr_lo,
 		fle->length);

@@ -678,7 +678,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));

 	DPAA2_SEC_DP_DEBUG(
-		"RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d\n",
+		"RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d",
 		DPAA2_GET_FD_ADDR(fd),
 		DPAA2_GET_FD_OFFSET(fd),
 		DPAA2_GET_FD_LEN(fd));
@@ -721,7 +721,7 @@ dpaa2_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_SEC_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -811,7 +811,7 @@ sec_fd_to_userdata(const struct qbman_fd *fd)
 	void *userdata;
 	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));

-	DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+	DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
 			   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
 	userdata = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1));
 	/* free the fle memory */
@@ -847,7 +847,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_SEC_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -900,7 +900,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
 			if (unlikely(
 				(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
-				DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+				DPAA2_SEC_DP_DEBUG("No frame is delivered");
 				continue;
 			}
 		}
@@ -929,7 +929,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 	*dequeue_status = 1;
 	*n_success = num_rx;

-	DPAA2_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+	DPAA2_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
 	/*Return the total number of packets received to DPAA2 app*/
 	return num_rx;
 }
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 906ea39047..131cd90c94 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -102,7 +102,7 @@ ern_sec_fq_handler(struct qman_portal *qm __rte_unused,
 		   struct qman_fq *fq,
 		   const struct qm_mr_entry *msg)
 {
-	DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x\n",
+	DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x",
 			fq->fqid, msg->ern.rc, msg->ern.seqnum);
 }

@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
 			op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
 		} else {
 			if (dpaa_sec_dp_dump > DPAA_SEC_DP_NO_DUMP) {
-				DPAA_SEC_DP_WARN("SEC return err:0x%x\n",
+				DPAA_SEC_DP_WARN("SEC return err:0x%x",
 						  ctx->fd_status);
 				if (dpaa_sec_dp_dump > DPAA_SEC_DP_ERR_DUMP)
 					dpaa_sec_dump(ctx, qp);
@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 			} else if (unlikely(ses->qp[rte_lcore_id() %
 						MAX_DPAA_CORES] != qp)) {
 				DPAA_SEC_DP_ERR("Old:sess->qp = %p"
-					" New qp = %p\n",
+					" New qp = %p",
 					ses->qp[rte_lcore_id() %
 					MAX_DPAA_CORES], qp);
 				frames_to_send = loop;
@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 				fd->cmd = 0x80000000 |
 					*((uint32_t *)((uint8_t *)op +
 					ses->pdcp.hfn_ovd_offset));
-				DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+				DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
 					*((uint32_t *)((uint8_t *)op +
 					ses->pdcp.hfn_ovd_offset)),
 					ses->pdcp.hfn_ovd);
@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
 	dpaa_qp->rx_pkts += num_rx;
 	dpaa_qp->rx_errs += nb_ops - num_rx;

-	DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+	DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);

 	return num_rx;
 }
@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 							NULL, NULL, NULL, NULL,
 							SOCKET_ID_ANY, 0);
 		if (!qp->ctx_pool) {
-			DPAA_SEC_ERR("%s create failed\n", str);
+			DPAA_SEC_ERR("%s create failed", str);
 			return -ENOMEM;
 		}
 	} else
@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
 	session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
 					     RTE_CACHE_LINE_SIZE);
 	if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
-		DPAA_SEC_ERR("No Memory for aead key\n");
+		DPAA_SEC_ERR("No Memory for aead key");
 		return -ENOMEM;
 	}
 	session->aead_key.length = xform->aead.key.length;
@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
 	for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
 		if (&qi->inq[i] == fq) {
 			if (qman_retire_fq(fq, NULL) != 0)
-				DPAA_SEC_DEBUG("Queue is not retired\n");
+				DPAA_SEC_DEBUG("Queue is not retired");
 			qman_oos_fq(fq);
 			qi->inq_attach[i] = 0;
 			return 0;
@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
 		qp->outq.cb.dqrr_dpdk_cb = dpaa_sec_process_atomic_event;
 		break;
 	case RTE_SCHED_TYPE_ORDERED:
-		DPAA_SEC_ERR("Ordered queue schedule type is not supported\n");
+		DPAA_SEC_ERR("Ordered queue schedule type is not supported");
 		return -ENOTSUP;
 	default:
 		opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
 	dpaa_sec_dp_dump = atoi(value);
 	if (dpaa_sec_dp_dump > DPAA_SEC_DP_FULL_DUMP) {
 		DPAA_SEC_WARN("WARN: DPAA_SEC_DP_DUMP_LEVEL is not "
-			      "supported, changing to FULL error prints\n");
+			      "supported, changing to FULL error prints");
 		dpaa_sec_dp_dump = DPAA_SEC_DP_FULL_DUMP;
 	}

@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)

 	ret = munmap(internals->sec_hw, MAP_SIZE);
 	if (ret)
-		DPAA_SEC_WARN("munmap failed\n");
+		DPAA_SEC_WARN("munmap failed");

 	close(map_fd);
 	cryptodev->driver_id = dpaa_cryptodev_driver_id;
@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 	return 0;

 init_error:
-	DPAA_SEC_ERR("driver %s: create failed\n", cryptodev->data->name);
+	DPAA_SEC_ERR("driver %s: create failed", cryptodev->data->name);

 	rte_free(cryptodev->security_ctx);
 	return -EFAULT;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_log.h b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
index fb895a8bc6..82ac1fa1c4 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_log.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;

 /* DP Logs, toggled out at compile time if level lower than current level */
 #define DPAA_SEC_DP_LOG(level, fmt, args...) \
-	RTE_LOG_DP(level, PMD, fmt, ## args)
+	RTE_LOG_DP_LINE(level, PMD, fmt, ## args)

 #define DPAA_SEC_DP_DEBUG(fmt, args...) \
 	DPAA_SEC_DP_LOG(DEBUG, fmt, ## args)
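
This dpaa_sec_log.h hunk is the one place in the series where the macro itself changes rather than its callers: the data-path logger switches to RTE_LOG_DP_LINE, which appends the newline at expansion time. A simplified sketch of the idea (the real definition in DPDK's rte_log.h is built on the RTE_FMT helpers and is more careful about empty argument lists):

    /* Sketch: the _LINE variant terminates the message itself. */
    #define RTE_LOG_DP_LINE(level, type, fmt, args...) \
        RTE_LOG_DP(level, type, fmt "\n", ## args)

With the terminator supplied centrally, the DPAA_SEC_DP_* call sites in the following hunks can drop their trailing "\n" without changing the emitted output.
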
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index ce49c4996f..f62c803894 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -761,7 +761,7 @@ build_dpaa_raw_proto_sg(uint8_t *drv_ctx,
 		fd->cmd = 0x80000000 |
 			*((uint32_t *)((uint8_t *)userdata +
 			ses->pdcp.hfn_ovd_offset));
-		DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+		DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
 			*((uint32_t *)((uint8_t *)userdata +
 			ses->pdcp.hfn_ovd_offset)),
 			ses->pdcp.hfn_ovd);
@@ -806,7 +806,7 @@ dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
 			} else if (unlikely(ses->qp[rte_lcore_id() %
 						MAX_DPAA_CORES] != dpaa_qp)) {
 				DPAA_SEC_DP_ERR("Old:sess->qp = %p"
-					" New qp = %p\n",
+					" New qp = %p",
 					ses->qp[rte_lcore_id() %
 					MAX_DPAA_CORES], dpaa_qp);
 				frames_to_send = loop;
@@ -955,7 +955,7 @@ dpaa_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 	*dequeue_status = 1;
 	*n_success = num_rx;

-	DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+	DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);

 	return num_rx;
 }
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index f485d130b6..0d2538832d 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -165,7 +165,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,

 	rte_cryptodev_pmd_probing_finish(dev);

-	IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n",
+	IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s",
 		     imb_get_version_str());

 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
@@ -176,7 +176,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,

 		if (retval)
 			IPSEC_MB_LOG(ERR,
-				"IPSec Multi-buffer register MP request failed.\n");
+				"IPSec Multi-buffer register MP request failed.");
 	}
 	return retval;
 }
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index 52722f94a0..252bcb3192 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -198,7 +198,7 @@ alloc_init_mb_mgr(void)
 	IMB_MGR *mb_mgr = alloc_mb_mgr(0);

 	if (unlikely(mb_mgr == NULL)) {
-		IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data\n");
+		IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data");
 		return NULL;
 	}

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 80de25c65b..8e74645e0a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 		uint16_t xcbc_mac_digest_len =
 			get_truncated_digest_byte_length(IMB_AUTH_AES_XCBC);
 		if (sess->auth.req_digest_len != xcbc_mac_digest_len) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 				get_digest_byte_length(IMB_AUTH_AES_CMAC);

 		if (sess->auth.req_digest_len > cmac_digest_len) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		/*
@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,

 		if (sess->auth.req_digest_len >
 			get_digest_byte_length(IMB_AUTH_AES_GMAC)) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 			sess->template_job.key_len_in_bytes = IMB_KEY_256_BYTES;
 			break;
 		default:
-			IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+			IPSEC_MB_LOG(ERR, "Invalid authentication key length");
 			return -EINVAL;
 		}
 		sess->template_job.u.GMAC._key = &sess->cipher.gcm_key;
@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 			sess->template_job.hash_alg = IMB_AUTH_ZUC_EIA3_BITLEN;

 			if (sess->auth.req_digest_len != 4) {
-				IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+				IPSEC_MB_LOG(ERR, "Invalid digest size");
 				return -EINVAL;
 			}
 		} else if (xform->auth.key.length == 32) {
@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 #else
 			if (sess->auth.req_digest_len != 4) {
 #endif
-				IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+				IPSEC_MB_LOG(ERR, "Invalid digest size");
 				return -EINVAL;
 			}
 		} else {
-			IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+			IPSEC_MB_LOG(ERR, "Invalid authentication key length");
 			return -EINVAL;
 		}

@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 			get_truncated_digest_byte_length(
 						IMB_AUTH_SNOW3G_UIA2_BITLEN);
 		if (sess->auth.req_digest_len != snow3g_uia2_digest_len) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
 		uint16_t kasumi_f9_digest_len =
 			get_truncated_digest_byte_length(IMB_AUTH_KASUMI_UIA1);
 		if (sess->auth.req_digest_len != kasumi_f9_digest_len) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,

 	if (sess->auth.req_digest_len > full_digest_size ||
 			sess->auth.req_digest_len == 0) {
-		IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+		IPSEC_MB_LOG(ERR, "Invalid digest size");
 		return -EINVAL;
 	}

@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
 		if (sess->auth.req_digest_len < AES_CCM_DIGEST_MIN_LEN ||
 			sess->auth.req_digest_len > AES_CCM_DIGEST_MAX_LEN ||
 			(sess->auth.req_digest_len & 1) == 1) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		break;
@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
 		/* GCM digest size must be between 1 and 16 */
 		if (sess->auth.req_digest_len == 0 ||
 				sess->auth.req_digest_len > 16) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		break;
@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
 		sess->template_job.enc_keys = sess->cipher.expanded_aes_keys.encode;
 		sess->template_job.dec_keys = sess->cipher.expanded_aes_keys.decode;
 		if (sess->auth.req_digest_len != 16) {
-			IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+			IPSEC_MB_LOG(ERR, "Invalid digest size");
 			return -EINVAL;
 		}
 		break;
@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
 	total_len = sgl_linear_cipher_auth_len(job, &auth_len);
 	linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
 	if (linear_buf == NULL) {
-		IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+		IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer");
 		return -1;
 	}

diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index e64df1a462..a0b354bb83 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -186,7 +186,7 @@ process_snow3g_cipher_op_bit(struct ipsec_mb_qp *qp,
 	src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
 	if (op->sym->m_dst == NULL) {
 		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
-		IPSEC_MB_LOG(ERR, "bit-level in-place not supported\n");
+		IPSEC_MB_LOG(ERR, "bit-level in-place not supported");
 		return 0;
 	}
 	length_in_bits = op->sym->cipher.data.length;
@@ -317,7 +317,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
 			IPSEC_MB_LOG(ERR,
 				"PMD supports only contiguous mbufs, "
 				"op (%p) provides noncontiguous mbuf as "
-				"source/destination buffer.\n", ops[i]);
+				"source/destination buffer.", ops[i]);
 			ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
 			return 0;
 		}
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 4647d568de..aa2363ef15 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -211,7 +211,7 @@ otx_cpt_ring_dbell(struct cpt_instance *instance, uint16_t count)
 static __rte_always_inline void *
 get_cpt_inst(struct command_queue *cqueue)
 {
-	CPT_LOG_DP_DEBUG("CPT queue idx %u\n", cqueue->idx);
+	CPT_LOG_DP_DEBUG("CPT queue idx %u", cqueue->idx);
 	return &cqueue->qhead[cqueue->idx * CPT_INST_SIZE];
 }

@@ -305,9 +305,9 @@ complete:
 				" error, MC completion code : 0x%x", user_req,
 				ret);
 		}
-		CPT_LOG_DP_DEBUG("MC status %.8x\n",
+		CPT_LOG_DP_DEBUG("MC status %.8x",
 			   *((volatile uint32_t *)user_req->alternate_caddr));
-		CPT_LOG_DP_DEBUG("HW status %.8x\n",
+		CPT_LOG_DP_DEBUG("HW status %.8x",
 			   *((volatile uint32_t *)user_req->completion_addr));
 	} else if ((cptres->s8x.compcode == CPT_8X_COMP_E_SWERR) ||
 		   (cptres->s8x.compcode == CPT_8X_COMP_E_FAULT)) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 101111e85b..e10a172f46 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -57,13 +57,13 @@ static void ossl_legacy_provider_load(void)
 	/* Load Multiple providers into the default (NULL) library context */
 	legacy = OSSL_PROVIDER_load(NULL, "legacy");
 	if (legacy == NULL) {
-		OPENSSL_LOG(ERR, "Failed to load Legacy provider\n");
+		OPENSSL_LOG(ERR, "Failed to load Legacy provider");
 		return;
 	}

 	deflt = OSSL_PROVIDER_load(NULL, "default");
 	if (deflt == NULL) {
-		OPENSSL_LOG(ERR, "Failed to load Default provider\n");
+		OPENSSL_LOG(ERR, "Failed to load Default provider");
 		OSSL_PROVIDER_unload(legacy);
 		return;
 	}
@@ -2123,7 +2123,7 @@ process_openssl_dsa_sign_op_evp(struct rte_crypto_op *cop,
 	dsa_sign_data_p = (const unsigned char *)dsa_sign_data;
 	DSA_SIG *sign = d2i_DSA_SIG(NULL, &dsa_sign_data_p, outlen);
 	if (!sign) {
-		OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+		OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
 		OPENSSL_free(dsa_sign_data);
 		goto err_dsa_sign;
 	} else {
@@ -2168,7 +2168,7 @@ process_openssl_dsa_verify_op_evp(struct rte_crypto_op *cop,

 	cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 	if (!param_bld) {
-		OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+		OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
 		return -1;
 	}

@@ -2246,7 +2246,7 @@ process_openssl_dsa_sign_op(struct rte_crypto_op *cop,
 			dsa);

 	if (sign == NULL) {
-		OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+		OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
 		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 	} else {
 		const BIGNUM *r = NULL, *s = NULL;
@@ -2275,7 +2275,7 @@ process_openssl_dsa_verify_op(struct rte_crypto_op *cop,
 	BIGNUM *pub_key = NULL;

 	if (sign == NULL) {
-		OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+		OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
 		cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
 		return -1;
 	}
@@ -2352,7 +2352,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,

 		if (!OSSL_PARAM_BLD_push_BN(param_bld_peer, OSSL_PKEY_PARAM_PUB_KEY,
 				pub_key)) {
-			OPENSSL_LOG(ERR, "Failed to set public key\n");
+			OPENSSL_LOG(ERR, "Failed to set public key");
 			OSSL_PARAM_BLD_free(param_bld_peer);
 			BN_free(pub_key);
 			return ret;
@@ -2397,7 +2397,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,

 		if (!OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
 				priv_key)) {
-			OPENSSL_LOG(ERR, "Failed to set private key\n");
+			OPENSSL_LOG(ERR, "Failed to set private key");
 			EVP_PKEY_CTX_free(peer_ctx);
 			OSSL_PARAM_free(params_peer);
 			BN_free(pub_key);
@@ -2423,7 +2423,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
 		goto err_dh;

 	if (op->ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
-		OPENSSL_LOG(DEBUG, "%s:%d updated pub key\n", __func__, __LINE__);
+		OPENSSL_LOG(DEBUG, "%s:%d updated pub key", __func__, __LINE__);
 		if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PUB_KEY, &pub_key))
 			goto err_dh;
 				/* output public key */
@@ -2432,7 +2432,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,

 	if (op->ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {

-		OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n", __func__, __LINE__);
+		OPENSSL_LOG(DEBUG, "%s:%d updated priv key", __func__, __LINE__);
 		if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PRIV_KEY, &priv_key))
 			goto err_dh;

@@ -2527,7 +2527,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
 		}
 		ret = set_dh_priv_key(dh_key, priv_key);
 		if (ret) {
-			OPENSSL_LOG(ERR, "Failed to set private key\n");
+			OPENSSL_LOG(ERR, "Failed to set private key");
 			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			BN_free(peer_key);
 			BN_free(priv_key);
@@ -2574,7 +2574,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
 		}
 		ret = set_dh_priv_key(dh_key, priv_key);
 		if (ret) {
-			OPENSSL_LOG(ERR, "Failed to set private key\n");
+			OPENSSL_LOG(ERR, "Failed to set private key");
 			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			BN_free(priv_key);
 			return 0;
@@ -2596,7 +2596,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
 	if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
 		const BIGNUM *pub_key = NULL;

-		OPENSSL_LOG(DEBUG, "%s:%d update public key\n",
+		OPENSSL_LOG(DEBUG, "%s:%d update public key",
 				__func__, __LINE__);

 		/* get the generated keys */
@@ -2610,7 +2610,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
 	if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {
 		const BIGNUM *priv_key = NULL;

-		OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n",
+		OPENSSL_LOG(DEBUG, "%s:%d updated priv key",
 				__func__, __LINE__);

 		/* get the generated keys */
@@ -2719,7 +2719,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
 	default:
 		cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
 		OPENSSL_LOG(ERR,
-				"rsa pad type not supported %d\n", pad);
+				"rsa pad type not supported %d", pad);
 		return ret;
 	}

@@ -2746,7 +2746,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
 		op->rsa.cipher.length = outlen;

 		OPENSSL_LOG(DEBUG,
-				"length of encrypted text %zu\n", outlen);
+				"length of encrypted text %zu", outlen);
 		break;

 	case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -2770,7 +2770,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
 			goto err_rsa;
 		op->rsa.message.length = outlen;

-		OPENSSL_LOG(DEBUG, "length of decrypted text %zu\n", outlen);
+		OPENSSL_LOG(DEBUG, "length of decrypted text %zu", outlen);
 		break;

 	case RTE_CRYPTO_ASYM_OP_SIGN:
@@ -2825,7 +2825,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,

 		OPENSSL_LOG(DEBUG,
 				"Length of public_decrypt %zu "
-				"length of message %zd\n",
+				"length of message %zd",
 				outlen, op->rsa.message.length);
 		if (CRYPTO_memcmp(tmp, op->rsa.message.data,
 				op->rsa.message.length)) {
@@ -3097,7 +3097,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
 	default:
 		cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
 		OPENSSL_LOG(ERR,
-				"rsa pad type not supported %d\n", pad);
+				"rsa pad type not supported %d", pad);
 		return 0;
 	}

@@ -3112,7 +3112,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
 		if (ret > 0)
 			op->rsa.cipher.length = ret;
 		OPENSSL_LOG(DEBUG,
-				"length of encrypted text %d\n", ret);
+				"length of encrypted text %d", ret);
 		break;

 	case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -3150,7 +3150,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,

 		OPENSSL_LOG(DEBUG,
 				"Length of public_decrypt %d "
-				"length of message %zd\n",
+				"length of message %zd",
 				ret, op->rsa.message.length);
 		if ((ret <= 0) || (CRYPTO_memcmp(tmp, op->rsa.message.data,
 				op->rsa.message.length))) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 1bbb855a59..b7b612fc57 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -892,7 +892,7 @@ static int openssl_set_asym_session_parameters(
 #if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
 		OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new();
 		if (!param_bld) {
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			goto err_rsa;
 		}

@@ -900,7 +900,7 @@ static int openssl_set_asym_session_parameters(
 			|| !OSSL_PARAM_BLD_push_BN(param_bld,
 					OSSL_PKEY_PARAM_RSA_E, e)) {
 			OSSL_PARAM_BLD_free(param_bld);
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			goto err_rsa;
 		}

@@ -1033,14 +1033,14 @@ static int openssl_set_asym_session_parameters(
 			ret = set_rsa_params(rsa, p, q);
 			if (ret) {
 				OPENSSL_LOG(ERR,
-					"failed to set rsa params\n");
+					"failed to set rsa params");
 				RSA_free(rsa);
 				goto err_rsa;
 			}
 			ret = set_rsa_crt_params(rsa, dmp1, dmq1, iqmp);
 			if (ret) {
 				OPENSSL_LOG(ERR,
-					"failed to set crt params\n");
+					"failed to set crt params");
 				RSA_free(rsa);
 				/*
 				 * set already populated params to NULL
@@ -1053,7 +1053,7 @@ static int openssl_set_asym_session_parameters(

 		ret = set_rsa_keys(rsa, n, e, d);
 		if (ret) {
-			OPENSSL_LOG(ERR, "Failed to load rsa keys\n");
+			OPENSSL_LOG(ERR, "Failed to load rsa keys");
 			RSA_free(rsa);
 			return ret;
 		}
@@ -1080,7 +1080,7 @@ err_rsa:
 		BN_CTX *ctx = BN_CTX_new();
 		if (ctx == NULL) {
 			OPENSSL_LOG(ERR,
-				" failed to allocate resources\n");
+				" failed to allocate resources");
 			return ret;
 		}
 		BN_CTX_start(ctx);
@@ -1111,7 +1111,7 @@ err_rsa:
 		BN_CTX *ctx = BN_CTX_new();
 		if (ctx == NULL) {
 			OPENSSL_LOG(ERR,
-				" failed to allocate resources\n");
+				" failed to allocate resources");
 			return ret;
 		}
 		BN_CTX_start(ctx);
@@ -1152,7 +1152,7 @@ err_rsa:
 		OSSL_PARAM_BLD *param_bld = NULL;
 		param_bld = OSSL_PARAM_BLD_new();
 		if (!param_bld) {
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			goto err_dh;
 		}
 		if ((!OSSL_PARAM_BLD_push_utf8_string(param_bld,
@@ -1168,7 +1168,7 @@ err_rsa:
 		OSSL_PARAM_BLD *param_bld_peer = NULL;
 		param_bld_peer = OSSL_PARAM_BLD_new();
 		if (!param_bld_peer) {
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			OSSL_PARAM_BLD_free(param_bld);
 			goto err_dh;
 		}
@@ -1203,7 +1203,7 @@ err_rsa:
 		dh = DH_new();
 		if (dh == NULL) {
 			OPENSSL_LOG(ERR,
-				"failed to allocate resources\n");
+				"failed to allocate resources");
 			goto err_dh;
 		}
 		ret = set_dh_params(dh, p, g);
@@ -1217,7 +1217,7 @@ err_rsa:
 		break;

 err_dh:
-		OPENSSL_LOG(ERR, " failed to set dh params\n");
+		OPENSSL_LOG(ERR, " failed to set dh params");
 #if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
 		BN_free(*p);
 		BN_free(*g);
@@ -1263,7 +1263,7 @@ err_dh:

 		param_bld = OSSL_PARAM_BLD_new();
 		if (!param_bld) {
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			goto err_dsa;
 		}

@@ -1273,7 +1273,7 @@ err_dh:
 			|| !OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
 			*priv_key)) {
 			OSSL_PARAM_BLD_free(param_bld);
-			OPENSSL_LOG(ERR, "failed to allocate resources\n");
+			OPENSSL_LOG(ERR, "failed to allocate resources");
 			goto err_dsa;
 		}
 		asym_session->xfrm_type = RTE_CRYPTO_ASYM_XFORM_DSA;
@@ -1313,14 +1313,14 @@ err_dh:
 		DSA *dsa = DSA_new();
 		if (dsa == NULL) {
 			OPENSSL_LOG(ERR,
-				" failed to allocate resources\n");
+				" failed to allocate resources");
 			goto err_dsa;
 		}

 		ret = set_dsa_params(dsa, p, q, g);
 		if (ret) {
 			DSA_free(dsa);
-			OPENSSL_LOG(ERR, "Failed to dsa params\n");
+			OPENSSL_LOG(ERR, "Failed to dsa params");
 			goto err_dsa;
 		}

@@ -1334,7 +1334,7 @@ err_dh:
 		ret = set_dsa_keys(dsa, pub_key, priv_key);
 		if (ret) {
 			DSA_free(dsa);
-			OPENSSL_LOG(ERR, "Failed to set keys\n");
+			OPENSSL_LOG(ERR, "Failed to set keys");
 			goto err_dsa;
 		}
 		asym_session->u.s.dsa = dsa;
@@ -1369,21 +1369,21 @@ err_dsa:

 		param_bld = OSSL_PARAM_BLD_new();
 		if (!param_bld) {
-			OPENSSL_LOG(ERR, "failed to allocate params\n");
+			OPENSSL_LOG(ERR, "failed to allocate params");
 			goto err_sm2;
 		}

 		ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
 				OSSL_ASYM_CIPHER_PARAM_DIGEST, "SM3", 0);
 		if (!ret) {
-			OPENSSL_LOG(ERR, "failed to push params\n");
+			OPENSSL_LOG(ERR, "failed to push params");
 			goto err_sm2;
 		}

 		ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
 				OSSL_PKEY_PARAM_GROUP_NAME, "SM2", 0);
 		if (!ret) {
-			OPENSSL_LOG(ERR, "failed to push params\n");
+			OPENSSL_LOG(ERR, "failed to push params");
 			goto err_sm2;
 		}

@@ -1393,7 +1393,7 @@ err_dsa:
 		ret = OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
 									 pkey_bn);
 		if (!ret) {
-			OPENSSL_LOG(ERR, "failed to push params\n");
+			OPENSSL_LOG(ERR, "failed to push params");
 			goto err_sm2;
 		}

@@ -1408,13 +1408,13 @@ err_dsa:
 		ret = OSSL_PARAM_BLD_push_octet_string(param_bld,
 				OSSL_PKEY_PARAM_PUB_KEY, pubkey, len);
 		if (!ret) {
-			OPENSSL_LOG(ERR, "failed to push params\n");
+			OPENSSL_LOG(ERR, "failed to push params");
 			goto err_sm2;
 		}

 		params = OSSL_PARAM_BLD_to_param(param_bld);
 		if (!params) {
-			OPENSSL_LOG(ERR, "failed to push params\n");
+			OPENSSL_LOG(ERR, "failed to push params");
 			goto err_sm2;
 		}

diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 2bf3060278..5d240a3de1 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,

 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
-	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
+	QAT_LOG(DEBUG, "Creating QAT ASYM device %s", name);

 	if (gen_dev_ops->cryptodev_ops == NULL) {
 		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..224cc0ab50 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		ret = -ENOTSUP;
 		goto error_out;
 	default:
-		QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+		QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u",
 				cipher_xform->algo);
 		ret = -EINVAL;
 		goto error_out;
@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 						aead_xform);
 		break;
 	default:
-		QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u\n",
+		QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u",
 				aead_xform->algo);
 		return -EINVAL;
 	}
@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,

 	err = imb_get_errno(m);
 	if (err)
-		QAT_LOG(ERR, "Error: %s!\n", imb_get_strerror(err));
+		QAT_LOG(ERR, "Error: %s!", imb_get_strerror(err));

 	return -EFAULT;
 }
@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 			&state2_size, cdesc->aes_cmac);
 #endif
 		if (ret) {
-			cdesc->aes_cmac ? QAT_LOG(ERR,
-						  "(CMAC)precompute failed")
-					: QAT_LOG(ERR,
-						  "(XCBC)precompute failed");
+			QAT_LOG(ERR, "(%s)precompute failed",
+				cdesc->aes_cmac ? "CMAC" : "XCBC");
 			return -EFAULT;
 		}
 		break;
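The hunk above goes slightly beyond newline removal: two near-identical
QAT_LOG calls selected by a ternary are folded into one call, with only
the varying algorithm name computed inline:

	/* Before: the ternary picked between two whole statements. */
	cdesc->aes_cmac ? QAT_LOG(ERR, "(CMAC)precompute failed")
			: QAT_LOG(ERR, "(XCBC)precompute failed");

	/* After: one statement; the ternary yields just the token. */
	QAT_LOG(ERR, "(%s)precompute failed",
		cdesc->aes_cmac ? "CMAC" : "XCBC");

This keeps the log level and format string in one place, so the message
stays greppable.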
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c
index 824383512e..e4b1a32398 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -634,7 +634,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
 	setup.sched_param = &params;
 	sess->handle_cipher = wd_cipher_alloc_sess(&setup);
 	if (!sess->handle_cipher) {
-		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		UADK_LOG(ERR, "uadk failed to alloc session!");
 		ret = -EINVAL;
 		goto env_uninit;
 	}
@@ -642,7 +642,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
 	ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length);
 	if (ret) {
 		wd_cipher_free_sess(sess->handle_cipher);
-		UADK_LOG(ERR, "uadk failed to set key!\n");
+		UADK_LOG(ERR, "uadk failed to set key!");
 		ret = -EINVAL;
 		goto env_uninit;
 	}
@@ -734,7 +734,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
 	setup.sched_param = &params;
 	sess->handle_digest = wd_digest_alloc_sess(&setup);
 	if (!sess->handle_digest) {
-		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		UADK_LOG(ERR, "uadk failed to alloc session!");
 		ret = -EINVAL;
 		goto env_uninit;
 	}
@@ -745,7 +745,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
 					xform->auth.key.data,
 					xform->auth.key.length);
 		if (ret) {
-			UADK_LOG(ERR, "uadk failed to alloc session!\n");
+			UADK_LOG(ERR, "uadk failed to alloc session!");
 			wd_digest_free_sess(sess->handle_digest);
 			sess->handle_digest = 0;
 			ret = -EINVAL;
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4854820ba6..c0d3178b71 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -591,7 +591,7 @@ virtio_crypto_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
 			qp_conf->nb_descriptors, socket_id, &vq);
 	if (ret < 0) {
 		VIRTIO_CRYPTO_INIT_LOG_ERR(
-			"virtio crypto data queue initialization failed\n");
+			"virtio crypto data queue initialization failed");
 		return ret;
 	}

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 10e65ef1d7..3d4fd818f8 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -295,7 +295,7 @@ static struct fsl_qdma_queue
 		for (i = 0; i < queue_num; i++) {
 			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
-				DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
+				DPAA_QDMA_ERR("Get wrong queue-sizes.");
 				goto fail;
 			}
 			queue_temp = queue_head + i + (j * queue_num);
@@ -345,7 +345,7 @@ fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
 	status_size = QDMA_STATUS_SIZE;
 	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
 	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
-		DPAA_QDMA_ERR("Get wrong status_size.\n");
+		DPAA_QDMA_ERR("Get wrong status_size.");
 		return NULL;
 	}

@@ -643,7 +643,7 @@ fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
 				FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
 	if (ret) {
 		DPAA_QDMA_ERR(
-			"failed to alloc dma buffer for comp descriptor\n");
+			"failed to alloc dma buffer for comp descriptor");
 		goto exit;
 	}

@@ -779,7 +779,7 @@ dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
 			(dma_addr_t)dst, (dma_addr_t)src,
 			length, NULL, NULL);
 	if (!fsl_comp) {
-		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+		DPAA_QDMA_DP_DEBUG("fsl_comp is NULL");
 		return -1;
 	}
 	ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
@@ -803,19 +803,19 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,

 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
-		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		DPAA_QDMA_ERR("DMA transaction error! %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECBR);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
 		qdma_writel(0xffffffff,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -849,19 +849,19 @@ dpaa_qdma_dequeue(void *dev_private,

 	intr = qdma_readl_be(status + FSL_QDMA_DEDR);
 	if (intr) {
-		DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+		DPAA_QDMA_ERR("DMA transaction error! %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
 		intr = qdma_readl(status + FSL_QDMA_DECBR);
-		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+		DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
 		qdma_writel(0xffffffff,
 			    status + FSL_QDMA_DEDR);
 		intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -974,7 +974,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
 	close(ccsr_qdma_fd);
 	if (fsl_qdma->ctrl_base == MAP_FAILED) {
 		DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
-		       "size %d\n", phys_addr, regs_size);
+		       "size %d", phys_addr, regs_size);
 		goto err;
 	}

@@ -998,7 +998,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)

 	ret = fsl_qdma_reg_init(fsl_qdma);
 	if (ret) {
-		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
+		DPAA_QDMA_ERR("Can't Initialize the qDMA engine.");
 		munmap(fsl_qdma->ctrl_base, regs_size);
 		goto err;
 	}
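Both dequeue paths above dump six error registers with near-identical
readl/log pairs. A table-driven loop would shrink the block; a
hypothetical refactor, not part of this patch (names taken from the
driver):

	/* Hypothetical: iterate a name/offset table instead of six
	 * copy-pasted qdma_readl() + DPAA_QDMA_INFO() pairs.
	 */
	static const struct {
		const char *name;
		uint32_t off;
	} err_regs[] = {
		{ "FSL_QDMA_DECFDW0R", FSL_QDMA_DECFDW0R },
		{ "FSL_QDMA_DECFDW1R", FSL_QDMA_DECFDW1R },
		{ "FSL_QDMA_DECFDW2R", FSL_QDMA_DECFDW2R },
		{ "FSL_QDMA_DECFDW3R", FSL_QDMA_DECFDW3R },
		{ "FSL_QDMA_DECFQIDR", FSL_QDMA_DECFQIDR },
		{ "FSL_QDMA_DECBR",    FSL_QDMA_DECBR },
	};
	unsigned int i;

	for (i = 0; i < RTE_DIM(err_regs); i++)
		DPAA_QDMA_INFO("reg %s %x", err_regs[i].name,
			       qdma_readl(status + err_regs[i].off));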
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 2c91ceec13..5780e49297 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -578,7 +578,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -608,7 +608,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 		while (1) {
 			if (qbman_swp_pull(swp, &pulldesc)) {
 				DPAA2_QDMA_DP_WARN(
-					"VDQ command not issued.QBMAN busy\n");
+					"VDQ command not issued.QBMAN busy");
 					/* Portal was busy, try again */
 				continue;
 			}
@@ -684,7 +684,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 	while (1) {
 		if (qbman_swp_pull(swp, &pulldesc)) {
 			DPAA2_QDMA_DP_WARN(
-				"VDQ command is not issued. QBMAN is busy (2)\n");
+				"VDQ command is not issued. QBMAN is busy (2)");
 			continue;
 		}
 		break;
@@ -728,7 +728,7 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -825,7 +825,7 @@ dpdmai_dev_submit_multi(struct qdma_virt_queue *qdma_vq,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c
index 4db3b0554c..8bc076f5d5 100644
--- a/drivers/dma/hisilicon/hisi_dmadev.c
+++ b/drivers/dma/hisilicon/hisi_dmadev.c
@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
 	struct hisi_dma_dev *hw = dev->data->dev_private;

 	if (hw->iomz == NULL) {
-		HISI_DMA_ERR(hw, "Vchan was not setup, start fail!\n");
+		HISI_DMA_ERR(hw, "Vchan was not setup, start fail!");
 		return -EINVAL;
 	}

@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
 			 * status array indexed by csq_head. Only error logs
 			 * are used for prompting.
 			 */
-			HISI_DMA_ERR(hw, "invalid csq_head:%u!\n", csq_head);
+			HISI_DMA_ERR(hw, "invalid csq_head:%u!", csq_head);
 			count = 0;
 			break;
 		}
@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));

 	if (pci_dev->mem_resource[2].addr == NULL) {
-		HISI_DMA_LOG(ERR, "%s BAR2 is NULL!\n", name);
+		HISI_DMA_LOG(ERR, "%s BAR2 is NULL!", name);
 		return -ENODEV;
 	}

diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 83d53942eb..dc2e8cd432 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -616,7 +616,7 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
 			sizeof(idxd->batch_comp_ring[0]))	* (idxd->max_batches + 1),
 			sizeof(idxd->batch_comp_ring[0]), dev->numa_node);
 	if (idxd->batch_comp_ring == NULL) {
-		IDXD_PMD_ERR("Unable to reserve memory for batch data\n");
+		IDXD_PMD_ERR("Unable to reserve memory for batch data");
 		ret = -ENOMEM;
 		goto cleanup;
 	}
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index a78889a7ef..2ee78773bb 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)

 		/* look up queue 0 to get the PCI structure */
 		snprintf(qname, sizeof(qname), "%s-q0", name);
-		IDXD_PMD_INFO("Looking up %s\n", qname);
+		IDXD_PMD_INFO("Looking up %s", qname);
 		ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
 		if (ret != 0) {
 			IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 		for (qid = 1; qid < max_qid; qid++) {
 			/* add the queue number to each device name */
 			snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
-			IDXD_PMD_INFO("Looking up %s\n", qname);
+			IDXD_PMD_INFO("Looking up %s", qname);
 			ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
 			if (ret != 0) {
 				IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
 		return ret;
 	}
 	if (idxd.u.pci->portals == NULL) {
-		IDXD_PMD_ERR("Error, invalid portal assigned during initialization\n");
+		IDXD_PMD_ERR("Error, invalid portal assigned during initialization");
 		free(idxd.u.pci);
 		return -EINVAL;
 	}
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 5fc14bcf22..8b7ff5652f 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -156,12 +156,12 @@ ioat_dev_start(struct rte_dma_dev *dev)
 	ioat->offset = 0;
 	ioat->failure = 0;

-	IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]\n",
+	IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]",
 			chansts_readable[ioat->status & IOAT_CHANSTS_STATUS],
 			ioat->status);

 	if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) {
-		IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n");
+		IOAT_PMD_WARN("Device HALTED on start, attempting to recover");
 		if (__ioat_recover(ioat) != 0) {
 			IOAT_PMD_ERR("Device couldn't be recovered");
 			return -1;
@@ -469,7 +469,7 @@ ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_
 		ioat->failure = ioat->regs->chanerr;
 		ioat->next_read = read + count + 1;
 		if (__ioat_recover(ioat) != 0) {
-			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			IOAT_PMD_ERR("Device HALTED and could not be recovered");
 			__dev_dump(dev_private, stdout);
 			return 0;
 		}
@@ -515,7 +515,7 @@ ioat_completed_status(void *dev_private, uint16_t qid __rte_unused,
 		count++;
 		ioat->next_read = read + count;
 		if (__ioat_recover(ioat) != 0) {
-			IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+			IOAT_PMD_ERR("Device HALTED and could not be recovered");
 			__dev_dump(dev_private, stdout);
 			return 0;
 		}
@@ -652,12 +652,12 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)

 	/* Do device initialization - reset and set error behaviour. */
 	if (ioat->regs->chancnt != 1)
-		IOAT_PMD_WARN("%s: Channel count == %d\n", __func__,
+		IOAT_PMD_WARN("%s: Channel count == %d", __func__,
 				ioat->regs->chancnt);

 	/* Locked by someone else. */
 	if (ioat->regs->chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE) {
-		IOAT_PMD_WARN("%s: Channel appears locked\n", __func__);
+		IOAT_PMD_WARN("%s: Channel appears locked", __func__);
 		ioat->regs->chanctrl = 0;
 	}

@@ -676,7 +676,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
 		rte_delay_ms(1);
 		if (++retry >= 200) {
 			IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8
-					", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32"\n",
+					", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32,
 					__func__,
 					ioat->regs->chancmd,
 					ioat->regs->chansts,
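One detail in the ioat reset hunk above: the format string is assembled
from adjacent string literals around the PRIx* width macros, so the fix
is simply dropping the "\n" from the final fragment; the result (fields
as in the driver) is:

	#include <inttypes.h>

	/* Adjacent literals concatenate into one format string; the
	 * trailing newline lived only in the last fragment.
	 */
	IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8
			", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32,
			__func__, ioat->regs->chancmd,
			ioat->regs->chansts, ioat->regs->chanerr);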
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6d59fdf909..bba70646fa 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -268,7 +268,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
 	sso_set_priv_mem_fn(dev->event_dev, NULL);

 	plt_tim_dbg(
-		"Total memory used %" PRIu64 "MB\n",
+		"Total memory used %" PRIu64 "MB",
 		(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
 			    (tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
 			   BIT_ULL(20)));
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 5044cb17ef..9dc5edb3fb 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
 	ret = dlb2_iface_get_num_resources(handle,
 					   &dlb2->hw_rsrc_query_results);
 	if (ret) {
-		DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d\n", ret);
+		DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d", ret);
 		return ret;
 	}

@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
 	const char **mask_str = opaque;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
 	if (*max_cq_depth < DLB2_MIN_CQ_DEPTH_OVERRIDE ||
 	    *max_cq_depth > DLB2_MAX_CQ_DEPTH_OVERRIDE ||
 	    !rte_is_power_of_2(*max_cq_depth)) {
-		DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2\n",
+		DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2",
 			     DLB2_MIN_CQ_DEPTH_OVERRIDE,
 			     DLB2_MAX_CQ_DEPTH_OVERRIDE);
 		return -EINVAL;
@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
 	if (*max_enq_depth < DLB2_MIN_ENQ_DEPTH_OVERRIDE ||
 	    *max_enq_depth > DLB2_MAX_ENQ_DEPTH_OVERRIDE ||
 	    !rte_is_power_of_2(*max_enq_depth)) {
-		DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2\n",
+		DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2",
 		DLB2_MIN_ENQ_DEPTH_OVERRIDE,
 		DLB2_MAX_ENQ_DEPTH_OVERRIDE);
 		return -EINVAL;
@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,

 	if (*max_num_events < 0 || *max_num_events >
 			DLB2_MAX_NUM_LDB_CREDITS) {
-		DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d\n",
+		DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d",
 			     DLB2_MAX_NUM_LDB_CREDITS);
 		return -EINVAL;
 	}
@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,

 	if (*num_dir_credits < 0 ||
 	    *num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
-		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
+		DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d",
 			     DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
 		return -EINVAL;
 	}
@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
 	int first, last, cos_id, i;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
 	} else if (sscanf(value, "%d:%d", &first, &cos_id) == 2) {
 		last = first;
 	} else {
-		DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val\n");
+		DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val");
 		return -EINVAL;
 	}

 	if (first > last || first < 0 ||
 		last >= DLB2_MAX_NUM_LDB_PORTS) {
-		DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value\n");
+		DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value");
 		return -EINVAL;
 	}

 	if (cos_id < DLB2_COS_0 || cos_id > DLB2_COS_3) {
-		DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4\n");
+		DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4");
 		return -EINVAL;
 	}

@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
 	struct dlb2_cos_bw *cos_bw = opaque;

 	if (opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,

 	if (sscanf(value, "%d:%d:%d:%d", &cos_bw->val[0], &cos_bw->val[1],
 		   &cos_bw->val[2], &cos_bw->val[3]) != 4) {
-		DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100\n");
+		DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100");
 		return -EINVAL;
 	}
 	if (cos_bw->val[0] + cos_bw->val[1] + cos_bw->val[2] + cos_bw->val[3] > 100) {
-		DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3  where all values combined are <= 100\n");
+		DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3  where all values combined are <= 100");
 		return -EINVAL;
 	}

@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
 		return ret;

 	if (*sw_credit_quanta <= 0) {
-		DLB2_LOG_ERR("sw_credit_quanta must be > 0\n");
+		DLB2_LOG_ERR("sw_credit_quanta must be > 0");
 		return -EINVAL;
 	}

@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
 	int ret;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
 	bool *dlb2_vector_opts_enabled = opaque;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
 	bool *default_ldb_port_allocation = opaque;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
 	bool *enable_cq_weight = opaque;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	int first, last, thresh, i;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
 		last = first;
 	} else {
-		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
 		return -EINVAL;
 	}

 	if (first > last || first < 0 ||
 		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
-		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
 		return -EINVAL;
 	}

 	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
-		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
 			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
 		return -EINVAL;
 	}
@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
 	int first, last, thresh, i;

 	if (value == NULL || opaque == NULL) {
-		DLB2_LOG_ERR("NULL pointer\n");
+		DLB2_LOG_ERR("NULL pointer");
 		return -EINVAL;
 	}

@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
 	} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
 		last = first;
 	} else {
-		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+		DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
 		return -EINVAL;
 	}

 	if (first > last || first < 0 ||
 		last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
-		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+		DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
 		return -EINVAL;
 	}

 	if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
-		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+		DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
 			     DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
 		return -EINVAL;
 	}
@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
 	if (ret) {
 		const struct rte_eventdev_data *data = dev->data;

-		DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+		DLB2_LOG_ERR("get resources err=%d, devid=%d",
 			     ret, data->dev_id);
 		/* fn is void, so fall through and return values set up in
 		 * probe
@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
 	struct dlb2_create_sched_domain_args *cfg;

 	if (resources_asked == NULL) {
-		DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter\n");
+		DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter");
 		ret = EINVAL;
 		goto error_exit;
 	}
@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,

 	if (cos_ports > resources_asked->num_ldb_ports ||
 	    (cos_ports && dlb2->max_cos_port >= resources_asked->num_ldb_ports)) {
-		DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports\n");
+		DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports");
 		ret = EINVAL;
 		goto error_exit;
 	}
@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_sched_domain_create(handle, cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s\n",
+		DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s",
 			     ret,
 			     dlb2_error_strings[cfg->response.status]);

@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
 		dlb2_hw_reset_sched_domain(dev, true);
 		ret = dlb2_hw_query_resources(dlb2);
 		if (ret) {
-			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+			DLB2_LOG_ERR("get resources err=%d, devid=%d",
 				     ret, data->dev_id);
 			return ret;
 		}
 	}

 	if (config->nb_event_queues > rsrcs->num_queues) {
-		DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).\n",
+		DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).",
 			     config->nb_event_queues,
 			     rsrcs->num_queues);
 		return -EINVAL;
 	}
 	if (config->nb_event_ports > (rsrcs->num_ldb_ports
 			+ rsrcs->num_dir_ports)) {
-		DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).\n",
+		DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).",
 			     config->nb_event_ports,
 			     (rsrcs->num_ldb_ports + rsrcs->num_dir_ports));
 		return -EINVAL;
 	}
 	if (config->nb_events_limit > rsrcs->nb_events_limit) {
-		DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).\n",
+		DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).",
 			     config->nb_events_limit,
 			     rsrcs->nb_events_limit);
 		return -EINVAL;
@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)

 	if (dlb2_hw_create_sched_domain(dlb2, handle, rsrcs,
 					dlb2->version) < 0) {
-		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
+		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed");
 		return -ENODEV;
 	}

@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)

 	ret = dlb2_iface_get_sn_allocation(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)

 	ret = dlb2_iface_set_sn_allocation(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)

 	ret = dlb2_iface_get_sn_occupancy(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
 	}

 	if (i == DLB2_NUM_SN_GROUPS) {
-		DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots\n",
+		DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots",
 		       __func__, sequence_numbers);
 		return;
 	}
@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_ldb_queue_create(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return -EINVAL;
 	}
@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,

 	qm_qid = dlb2_hw_create_ldb_queue(dlb2, ev_queue, queue_conf);
 	if (qm_qid < 0) {
-		DLB2_LOG_ERR("Failed to create the load-balanced queue\n");
+		DLB2_LOG_ERR("Failed to create the load-balanced queue");

 		return qm_qid;
 	}
@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
 			RTE_CACHE_LINE_SIZE);

 	if (qe == NULL)	{
-		DLB2_LOG_ERR("dlb2: no memory for consume_qe\n");
+		DLB2_LOG_ERR("dlb2: no memory for consume_qe");
 		return -ENOMEM;
 	}
 	qm_port->consume_qe = qe;
@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
 			RTE_CACHE_LINE_SIZE);

 	if (qe == NULL) {
-		DLB2_LOG_ERR("dlb2: no memory for complete_qe\n");
+		DLB2_LOG_ERR("dlb2: no memory for complete_qe");
 		return -ENOMEM;
 	}
 	qm_port->int_arm_qe = qe;
@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
 	qm_port->qe4 = rte_zmalloc(mz_name, sz, RTE_CACHE_LINE_SIZE);

 	if (qm_port->qe4 == NULL) {
-		DLB2_LOG_ERR("dlb2: no qe4 memory\n");
+		DLB2_LOG_ERR("dlb2: no qe4 memory");
 		ret = -ENOMEM;
 		goto error_exit;
 	}

 	ret = dlb2_init_int_arm_qe(qm_port, mz_name);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d\n", ret);
+		DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d", ret);
 		goto error_exit;
 	}

 	ret = dlb2_init_consume_qe(qm_port, mz_name);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d\n", ret);
+		DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d", ret);
 		goto error_exit;
 	}

@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 		return -EINVAL;

 	if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
-		DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d\n",
+		DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d",
 			     DLB2_MIN_CQ_DEPTH);
 		return -EINVAL;
 	}

 	if (dlb2->version == DLB2_HW_V2 && ev_port->cq_weight != 0 &&
 	    ev_port->cq_weight > dequeue_depth) {
-		DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d\n",
+		DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d",
 			     dequeue_depth, ev_port->cq_weight);
 		return -EINVAL;
 	}
@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_ldb_port_create(handle, &cfg,  dlb2->poll_mode);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		goto error_exit;
 	}
@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,

 	ret = dlb2_init_qe_mem(qm_port, mz_name);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+		DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
 		goto error_exit;
 	}

@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
 		ret = dlb2_iface_enable_cq_weight(handle, &cq_weight_args);

 		if (ret < 0) {
-			DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+			DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
 					ret,
 					dlb2_error_strings[cfg.response.  status]);
 			goto error_exit;
@@ -1714,7 +1714,7 @@ error_exit:

 	rte_spinlock_unlock(&handle->resource_lock);

-	DLB2_LOG_ERR("dlb2: create ldb port failed!\n");
+	DLB2_LOG_ERR("dlb2: create ldb port failed!");

 	return ret;
 }
@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 		return -EINVAL;

 	if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
-		DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d\n",
+		DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d",
 			     DLB2_MIN_CQ_DEPTH, DLB2_MAX_INPUT_QUEUE_DEPTH);
 		return -EINVAL;
 	}

 	if (enqueue_depth < DLB2_MIN_ENQUEUE_DEPTH) {
-		DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d\n",
+		DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d",
 			     DLB2_MIN_ENQUEUE_DEPTH);
 		return -EINVAL;
 	}
@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_dir_port_create(handle, &cfg,  dlb2->poll_mode);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		goto error_exit;
 	}
@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
 	ret = dlb2_init_qe_mem(qm_port, mz_name);

 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+		DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
 		goto error_exit;
 	}

@@ -1913,7 +1913,7 @@ error_exit:

 	rte_spinlock_unlock(&handle->resource_lock);

-	DLB2_LOG_ERR("dlb2: create dir port failed!\n");
+	DLB2_LOG_ERR("dlb2: create dir port failed!");

 	return ret;
 }
@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 	int ret;

 	if (dev == NULL || port_conf == NULL) {
-		DLB2_LOG_ERR("Null parameter\n");
+		DLB2_LOG_ERR("Null parameter");
 		return -EINVAL;
 	}

@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 	ev_port = &dlb2->ev_ports[ev_port_id];
 	/* configured? */
 	if (ev_port->setup_done) {
-		DLB2_LOG_ERR("evport %d is already configured\n", ev_port_id);
+		DLB2_LOG_ERR("evport %d is already configured", ev_port_id);
 		return -EINVAL;
 	}

@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,

 	if (port_conf->enqueue_depth > sw_credit_quanta ||
 	    port_conf->enqueue_depth > hw_credit_quanta) {
-		DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d\n",
+		DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d",
 			     port_conf->enqueue_depth,
 			     sw_credit_quanta,
 			     hw_credit_quanta);
@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 					      port_conf->dequeue_depth,
 					      port_conf->enqueue_depth);
 		if (ret < 0) {
-			DLB2_LOG_ERR("Failed to create the lB port ve portId=%d\n",
+			DLB2_LOG_ERR("Failed to create the lB port ve portId=%d",
 				     ev_port_id);

 			return ret;
@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
 					      port_conf->dequeue_depth,
 					      port_conf->enqueue_depth);
 		if (ret < 0) {
-			DLB2_LOG_ERR("Failed to create the DIR port\n");
+			DLB2_LOG_ERR("Failed to create the DIR port");
 			return ret;
 		}
 	}
@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,

 	ret = dlb2_iface_map_qid(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
-		DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d\n",
+		DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d",
 			     handle->domain_id, cfg.port_id,
 			     cfg.qid,
 			     cfg.priority);
@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
 			first_avail = i;
 	}
 	if (first_avail == -1) {
-		DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.\n",
+		DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.",
 			     ev_port->qm_port.id);
 		return -EINVAL;
 	}
@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_dir_queue_create(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return -EINVAL;
 	}
@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
 	qm_qid = dlb2_hw_create_dir_queue(dlb2, ev_queue, ev_port->qm_port.id);

 	if (qm_qid < 0) {
-		DLB2_LOG_ERR("Failed to create the DIR queue\n");
+		DLB2_LOG_ERR("Failed to create the DIR queue");
 		return qm_qid;
 	}

@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
 		err = dlb2_event_queue_join_ldb(dlb2, ev_port, ev_queue, prio);

 	if (err) {
-		DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d\n",
+		DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d",
 			     ev_queue->qm_queue.is_directed ? "DIR" : "LDB",
 			     ev_queue->id, ev_port->id);

@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
 	queue_is_dir = ev_queue->qm_queue.is_directed;

 	if (port_is_dir != queue_is_dir) {
-		DLB2_LOG_ERR("%s queue %u can't link to %s port %u\n",
+		DLB2_LOG_ERR("%s queue %u can't link to %s port %u",
 			     queue_is_dir ? "DIR" : "LDB", ev_queue->id,
 			     port_is_dir ? "DIR" : "LDB", ev_port->id);

@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,

 	/* Check if there is space for the requested link */
 	if (!link_exists && index == -1) {
-		DLB2_LOG_ERR("no space for new link\n");
+		DLB2_LOG_ERR("no space for new link");
 		rte_errno = -ENOSPC;
 		return -1;
 	}
@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
 	/* Check if the directed port is already linked */
 	if (ev_port->qm_port.is_directed && ev_port->num_links > 0 &&
 	    !link_exists) {
-		DLB2_LOG_ERR("Can't link DIR port %d to >1 queues\n",
+		DLB2_LOG_ERR("Can't link DIR port %d to >1 queues",
 			     ev_port->id);
 		rte_errno = -EINVAL;
 		return -1;
@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
 	/* Check if the directed queue is already linked */
 	if (ev_queue->qm_queue.is_directed && ev_queue->num_links > 0 &&
 	    !link_exists) {
-		DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports\n",
+		DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports",
 			     ev_queue->id);
 		rte_errno = -EINVAL;
 		return -1;
@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
 	RTE_SET_USED(dev);

 	if (ev_port == NULL) {
-		DLB2_LOG_ERR("dlb2: evport not setup\n");
+		DLB2_LOG_ERR("dlb2: evport not setup");
 		rte_errno = -EINVAL;
 		return 0;
 	}

 	if (!ev_port->setup_done &&
 	    ev_port->qm_port.config_state != DLB2_PREV_CONFIGURED) {
-		DLB2_LOG_ERR("dlb2: evport not setup\n");
+		DLB2_LOG_ERR("dlb2: evport not setup");
 		rte_errno = -EINVAL;
 		return 0;
 	}
@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,

 	ret = dlb2_iface_unmap_qid(handle, &cfg);
 	if (ret < 0)
-		DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);

 	return ret;
@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
 	RTE_SET_USED(dev);

 	if (!ev_port->setup_done) {
-		DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+		DLB2_LOG_ERR("dlb2: evport %d is not configured",
 			     ev_port->id);
 		rte_errno = -EINVAL;
 		return 0;
@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
 		int ret, j;

 		if (queues[i] >= dlb2->num_queues) {
-			DLB2_LOG_ERR("dlb2: invalid queue id %d\n", queues[i]);
+			DLB2_LOG_ERR("dlb2: invalid queue id %d", queues[i]);
 			rte_errno = -EINVAL;
 			return i; /* return index of offending queue */
 		}
@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,

 		ret = dlb2_event_queue_detach_ldb(dlb2, ev_port, ev_queue);
 		if (ret) {
-			DLB2_LOG_ERR("unlink err=%d for port %d queue %d\n",
+			DLB2_LOG_ERR("unlink err=%d for port %d queue %d",
 				     ret, ev_port->id, queues[i]);
 			rte_errno = -ENOENT;
 			return i; /* return index of offending queue */
@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
 	RTE_SET_USED(dev);

 	if (!ev_port->setup_done) {
-		DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+		DLB2_LOG_ERR("dlb2: evport %d is not configured",
 			     ev_port->id);
 		rte_errno = -EINVAL;
 		return 0;
@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
 	ret = dlb2_iface_pending_port_unmaps(handle, &cfg);

 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)

 	rte_spinlock_lock(&dlb2->qm_instance.resource_lock);
 	if (dlb2->run_state != DLB2_RUN_STATE_STOPPED) {
-		DLB2_LOG_ERR("bad state %d for dev_start\n",
+		DLB2_LOG_ERR("bad state %d for dev_start",
 			     (int)dlb2->run_state);
 		rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
 		return -EINVAL;
@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)

 	ret = dlb2_iface_sched_domain_start(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 		case RTE_SCHED_TYPE_ORDERED:
 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
 			if (qm_queue->sched_type != RTE_SCHED_TYPE_ORDERED) {
-				DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d\n",
+				DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d",
 					     *queue_id);
 				rte_errno = -EINVAL;
 				return 1;
@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
 				*sched_type = DLB2_SCHED_UNORDERED;
 			break;
 		default:
-			DLB2_LOG_ERR("Unsupported LDB sched type in put_qe\n");
+			DLB2_LOG_ERR("Unsupported LDB sched type in put_qe");
 			DLB2_INC_STAT(ev_port->stats.tx_invalid, 1);
 			rte_errno = -EINVAL;
 			return 1;
@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
 	int i;

 	if (port_id > dlb2->num_ports) {
-		DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release\n",
+		DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release",
 			     port_id);
 		rte_errno = -EINVAL;
 		return;
@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
 sw_credit_update:
 	/* each release returns one credit */
 	if (unlikely(!ev_port->outstanding_releases)) {
-		DLB2_LOG_ERR("%s: Outstanding releases underflowed.\n",
+		DLB2_LOG_ERR("%s: Outstanding releases underflowed.",
 			     __func__);
 		return;
 	}
@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
 		 * buffer is a mbuf.
 		 */
 		if (unlikely(qe->error)) {
-			DLB2_LOG_ERR("QE error bit ON\n");
+			DLB2_LOG_ERR("QE error bit ON");
 			DLB2_INC_STAT(ev_port->stats.traffic.rx_drop, 1);
 			dlb2_consume_qe_immediate(qm_port, 1);
 			continue; /* Ignore */
@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_get_ldb_queue_depth(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,

 	ret = dlb2_iface_get_dir_queue_depth(handle, &cfg);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)\n",
+		DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)",
 			     ret, dlb2_error_strings[cfg.response.status]);
 		return ret;
 	}
@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
 	}

 	if (i == dlb2->num_ports) {
-		DLB2_LOG_ERR("internal error: no LDB ev_ports\n");
+		DLB2_LOG_ERR("internal error: no LDB ev_ports");
 		return;
 	}

@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
 	rte_event_port_unlink(dev_id, ev_port->id, NULL, 0);

 	if (rte_errno) {
-		DLB2_LOG_ERR("internal error: failed to unlink ev_port %d\n",
+		DLB2_LOG_ERR("internal error: failed to unlink ev_port %d",
 			     ev_port->id);
 		return;
 	}
@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
 		/* Link the ev_port to the queue */
 		ret = rte_event_port_link(dev_id, ev_port->id, &qid, &prio, 1);
 		if (ret != 1) {
-			DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d\n",
+			DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d",
 				     ev_port->id, qid);
 			return;
 		}
@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
 		/* Unlink the ev_port from the queue */
 		ret = rte_event_port_unlink(dev_id, ev_port->id, &qid, 1);
 		if (ret != 1) {
-			DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d\n",
+			DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d",
 				     ev_port->id, qid);
 			return;
 		}
@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
 		rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
 		return;
 	} else if (dlb2->run_state != DLB2_RUN_STATE_STARTED) {
-		DLB2_LOG_ERR("Internal error: bad state %d for dev_stop\n",
+		DLB2_LOG_ERR("Internal error: bad state %d for dev_stop",
 			     (int)dlb2->run_state);
 		rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
 		return;
@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,

 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
-		DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+		DLB2_LOG_ERR("could not open event hardware device, err=%d",
 			     err);
 		return err;
 	}
@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	err = dlb2_iface_get_device_version(&dlb2->qm_instance,
 					    &dlb2->revision);
 	if (err < 0) {
-		DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d\n",
+		DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d",
 			     err);
 		return err;
 	}

 	err = dlb2_hw_query_resources(dlb2);
 	if (err) {
-		DLB2_LOG_ERR("get resources err=%d for %s\n",
+		DLB2_LOG_ERR("get resources err=%d for %s",
 			     err, name);
 		return err;
 	}
@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 				break;
 		}
 		if (ret) {
-			DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d\n",
+			DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d",
 				     err);
 			return err;
 		}
@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,

 	err = dlb2_iface_get_cq_poll_mode(&dlb2->qm_instance, &dlb2->poll_mode);
 	if (err < 0) {
-		DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d\n",
+		DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d",
 			     err);
 		return err;
 	}
@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
 	/* Complete xtstats runtime initialization */
 	err = dlb2_xstats_init(dlb2);
 	if (err) {
-		DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d\n", err);
+		DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d", err);
 		return err;
 	}

@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,

 	err = dlb2_iface_open(&dlb2->qm_instance, name);
 	if (err < 0) {
-		DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+		DLB2_LOG_ERR("could not open event hardware device, err=%d",
 			     err);
 		return err;
 	}

 	err = dlb2_hw_query_resources(dlb2);
 	if (err) {
-		DLB2_LOG_ERR("get resources err=%d for %s\n",
+		DLB2_LOG_ERR("get resources err=%d for %s",
 			     err, name);
 		return err;
 	}
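The long run of set_*() fixes in dlb2.c all touch devarg parsers that
share the rte_kvargs callback shape. A minimal sketch of that pattern
(set_example and "example_arg" are hypothetical names):

	#include <errno.h>
	#include <stdlib.h>
	#include <rte_kvargs.h>

	/* Hypothetical parser with the same shape as the callbacks
	 * patched above: validate both pointers, parse into *opaque.
	 */
	static int
	set_example(const char *key __rte_unused, const char *value,
		    void *opaque)
	{
		int *out = opaque;

		if (value == NULL || opaque == NULL) {
			DLB2_LOG_ERR("NULL pointer"); /* no trailing \n */
			return -EINVAL;
		}
		*out = atoi(value);
		return 0;
	}

	/* Registered against a devarg key during probe, e.g.: */
	ret = rte_kvargs_process(kvlist, "example_arg",
				 set_example, &cfg_value);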
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index ff15271dda..28de48e24e 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -766,7 +766,7 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
 			fn = get_queue_stat;
 			break;
 		default:
-			DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+			DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
 			goto invalid_value;
 		}

@@ -827,7 +827,7 @@ dlb2_eventdev_xstats_get_by_name(const struct rte_eventdev *dev,
 				fn = get_queue_stat;
 				break;
 			default:
-				DLB2_LOG_ERR("Unexpected xstat fn_id %d\n",
+				DLB2_LOG_ERR("Unexpected xstat fn_id %d",
 					  xs->fn_id);
 				return (uint64_t)-1;
 			}
@@ -865,7 +865,7 @@ dlb2_xstats_reset_range(struct dlb2_eventdev *dlb2, uint32_t start,
 			fn = get_queue_stat;
 			break;
 		default:
-			DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+			DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
 			return;
 		}

diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a95d3227a4..89eabc2a93 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -72,7 +72,7 @@ static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
 	};

 	if (retries == DLB2_READY_RETRY_LIMIT) {
-		DLB2_LOG_ERR("[%s()] wait for device ready timed out\n",
+		DLB2_LOG_ERR("[%s()] wait for device ready timed out",
 		       __func__);
 		return -1;
 	}
@@ -214,7 +214,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	pcie_cap_offset = rte_pci_find_capability(pdev, RTE_PCI_CAP_ID_EXP);

 	if (pcie_cap_offset < 0) {
-		DLB2_LOG_ERR("[%s()] failed to find the pcie capability\n",
+		DLB2_LOG_ERR("[%s()] failed to find the pcie capability",
 		       __func__);
 		return pcie_cap_offset;
 	}
@@ -261,7 +261,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	off = RTE_PCI_COMMAND;
 	cmd = 0;
 	if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
-		DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+		DLB2_LOG_ERR("[%s()] failed to write the pci command",
 		       __func__);
 		return ret;
 	}
@@ -273,7 +273,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_DEVSTA;
 		ret = rte_pci_read_config(pdev, &devsta_busy_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to read the pci device status\n",
+			DLB2_LOG_ERR("[%s()] failed to read the pci device status",
 			       __func__);
 			return ret;
 		}
@@ -286,7 +286,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	}

 	if (wait_count == 4) {
-		DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out\n",
+		DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out",
 		       __func__);
 		return -1;
 	}
@@ -294,7 +294,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
 	ret = rte_pci_read_config(pdev, &devctl_word, 2, off);
 	if (ret != 2) {
-		DLB2_LOG_ERR("[%s()] failed to read the pcie device control\n",
+		DLB2_LOG_ERR("[%s()] failed to read the pcie device control",
 		       __func__);
 		return ret;
 	}
@@ -303,7 +303,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

 	ret = rte_pci_write_config(pdev, &devctl_word, 2, off);
 	if (ret != 2) {
-		DLB2_LOG_ERR("[%s()] failed to write the pcie device control\n",
+		DLB2_LOG_ERR("[%s()] failed to write the pcie device control",
 		       __func__);
 		return ret;
 	}
@@ -316,7 +316,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
 		ret = rte_pci_write_config(pdev, &dev_ctl_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -324,7 +324,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL;
 		ret = rte_pci_write_config(pdev, &lnk_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -332,7 +332,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL;
 		ret = rte_pci_write_config(pdev, &slt_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -340,7 +340,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_RTCTL;
 		ret = rte_pci_write_config(pdev, &rt_ctl_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -348,7 +348,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL2;
 		ret = rte_pci_write_config(pdev, &dev_ctl2_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -356,7 +356,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL2;
 		ret = rte_pci_write_config(pdev, &lnk_word2, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -364,7 +364,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL2;
 		ret = rte_pci_write_config(pdev, &slt_word2, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -376,7 +376,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pri_cap_offset + RTE_PCI_PRI_ALLOC_REQ;
 		ret = rte_pci_write_config(pdev, &pri_reqs_dword, 4, off);
 		if (ret != 4) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -384,7 +384,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = pri_cap_offset + RTE_PCI_PRI_CTRL;
 		ret = rte_pci_write_config(pdev, &pri_ctrl_word, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -402,7 +402,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

 		ret = rte_pci_write_config(pdev, &tmp, 4, off);
 		if (ret != 4) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -413,7 +413,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

 		ret = rte_pci_write_config(pdev, &tmp, 4, off);
 		if (ret != 4) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -424,7 +424,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

 		ret = rte_pci_write_config(pdev, &tmp, 4, off);
 		if (ret != 4) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -434,7 +434,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = (i - 1) * 4;
 		ret = rte_pci_write_config(pdev, &dword[i - 1], 4, off);
 		if (ret != 4) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -444,7 +444,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
 		cmd &= ~RTE_PCI_COMMAND_INTX_DISABLE;
 		if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pci command",
 			       __func__);
 			return ret;
 		}
@@ -457,7 +457,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 			cmd |= RTE_PCI_MSIX_FLAGS_ENABLE;
 			cmd |= RTE_PCI_MSIX_FLAGS_MASKALL;
 			if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
-				DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+				DLB2_LOG_ERR("[%s()] failed to write msix flags",
 				       __func__);
 				return ret;
 			}
@@ -467,7 +467,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
 			cmd &= ~RTE_PCI_MSIX_FLAGS_MASKALL;
 			if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
-				DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+				DLB2_LOG_ERR("[%s()] failed to write msix flags",
 				       __func__);
 				return ret;
 			}
@@ -493,7 +493,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

 		ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -509,7 +509,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 		off = acs_cap_offset + RTE_PCI_ACS_CTRL;
 		ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
 		if (ret != 2) {
-			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+			DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 			return ret;
 		}
@@ -520,7 +520,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
 	 */
 	off = DLB2_PCI_PASID_CAP_OFFSET;
 	if (rte_pci_pasid_set_state(pdev, off, false) < 0) {
-		DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+		DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
 				__func__, (int)off);
 		return -1;
 	}
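
The removals above are safe because these driver log wrappers already terminate
each message: the macro itself appends the trailing newline, so a "\n" inside
the format string printed a blank line after every log. A minimal sketch of the
pattern, assuming a wrapper shaped like the DLB2 one (the real per-driver
definitions differ in detail and go through rte_log()):

    #include <stdio.h>

    /* Illustrative stand-in for RTE_LOG-based wrappers such as
     * DLB2_LOG_ERR, which append "\n" themselves (GNU ##__VA_ARGS__,
     * as DPDK builds with GCC/Clang). */
    #define DRV_LOG_ERR(fmt, ...) \
            fprintf(stderr, "drv: " fmt "\n", ##__VA_ARGS__)

    int main(void)
    {
            /* Before: the message's "\n" plus the macro's "\n"
             * produced a blank line after the log. */
            DRV_LOG_ERR("failed to write the pcie device control\n");
            /* After: exactly one newline per log line. */
            DRV_LOG_ERR("failed to write the pcie device control");
            return 0;
    }
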
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 3d15250e11..019e90f7e7 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -336,7 +336,7 @@ dlb2_pf_ldb_port_create(struct dlb2_hw_dev *handle,
 	/* Lock the page in memory */
 	ret = rte_mem_lock_page(port_base);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+		DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
 		goto create_port_err;
 	}

@@ -411,7 +411,7 @@ dlb2_pf_dir_port_create(struct dlb2_hw_dev *handle,
 	/* Lock the page in memory */
 	ret = rte_mem_lock_page(port_base);
 	if (ret < 0) {
-		DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+		DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
 		goto create_port_err;
 	}

@@ -737,7 +737,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 						&dlb2_args,
 						dlb2->version);
 			if (ret) {
-				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
+				DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d",
 					     ret, rte_errno);
 				goto dlb2_probe_failed;
 			}
@@ -748,7 +748,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 		dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev, probe_args);

 		if (dlb2->qm_instance.pf_dev == NULL) {
-			DLB2_LOG_ERR("DLB2 PF Probe failed with error %d\n",
+			DLB2_LOG_ERR("DLB2 PF Probe failed with error %d",
 				     rte_errno);
 			ret = -rte_errno;
 			goto dlb2_probe_failed;
@@ -766,13 +766,13 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
 	if (ret)
 		goto dlb2_probe_failed;

-	DLB2_LOG_INFO("DLB2 PF Probe success\n");
+	DLB2_LOG_INFO("DLB2 PF Probe success");

 	return 0;

 dlb2_probe_failed:

-	DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d\n", ret);
+	DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d", ret);

 	return ret;
 }
@@ -811,7 +811,7 @@ event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
 					     event_dlb2_pf_name);
 	if (ret) {
 		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
-				"ret=%d\n", ret);
+				"ret=%d", ret);
 	}

 	return ret;
@@ -826,7 +826,7 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)

 	if (ret) {
 		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
-				"ret=%d\n", ret);
+				"ret=%d", ret);
 	}

 	return ret;
@@ -845,7 +845,7 @@ event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
 					    event_dlb2_pf_name);
 	if (ret) {
 		DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
-				"ret=%d\n", ret);
+				"ret=%d", ret);
 	}

 	return ret;
@@ -860,7 +860,7 @@ event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)

 	if (ret) {
 		DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
-				"ret=%d\n", ret);
+				"ret=%d", ret);
 	}

 	return ret;
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index dd4e64395f..4658eaf3a2 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
 			DPAA2_EVENTDEV_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -276,7 +276,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
 		ret = dpaa2_affine_qbman_swp();
 		if (ret < 0) {
 			DPAA2_EVENTDEV_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
 	for (i = 0; i < cryptodev->data->nb_queue_pairs; i++) {
 		ret = dpaa2_sec_eventq_attach(cryptodev, i, dpcon, ev);
 		if (ret) {
-			DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d\n",
+			DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d",
 				    ret);
 			goto fail;
 		}
@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
 				      dpcon, &conf->ev);
 	if (ret) {
 		DPAA2_EVENTDEV_ERR(
-			"dpaa2_sec_eventq_attach failed: ret: %d\n", ret);
+			"dpaa2_sec_eventq_attach failed: ret: %d", ret);
 		return ret;
 	}
 	return 0;
@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
 		ret = dpaa2_sec_eventq_detach(cdev, i);
 		if (ret) {
 			DPAA2_EVENTDEV_ERR(
-				"dpaa2_sec_eventq_detach failed:ret %d\n", ret);
+				"dpaa2_sec_eventq_detach failed:ret %d", ret);
 			return ret;
 		}
 	}
@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
 	ret = dpaa2_sec_eventq_detach(cryptodev, rx_queue_id);
 	if (ret) {
 		DPAA2_EVENTDEV_ERR(
-			"dpaa2_sec_eventq_detach failed: ret: %d\n", ret);
+			"dpaa2_sec_eventq_detach failed: ret: %d", ret);
 		return ret;
 	}

@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)

 	eventdev = rte_event_pmd_get_named_dev(name);
 	if (eventdev == NULL) {
-		RTE_EDEV_LOG_ERR("eventdev with name %s not allocated", name);
+		DPAA2_EVENTDEV_ERR("eventdev with name %s not allocated", name);
 		return -1;
 	}

diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 090b3ed183..82f17144a6 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -196,7 +196,7 @@ timvf_ring_start(const struct rte_event_timer_adapter *adptr)
 	timr->tck_int = NSEC2CLK(timr->tck_nsec, rte_get_timer_hz());
 	timr->fast_div = rte_reciprocal_value_u64(timr->tck_int);
 	timvf_log_info("nb_bkts %d min_ns %"PRIu64" min_cyc %"PRIu64""
-			" maxtmo %"PRIu64"\n",
+			" maxtmo %"PRIu64,
 			timr->nb_bkts, timr->tck_nsec, interval,
 			timr->max_tout);

diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 0cccaf7e97..fe0c0ede6f 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -99,7 +99,7 @@ opdl_port_link(struct rte_eventdev *dev,

 	if (unlikely(dev->data->dev_started)) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Attempt to link queue (%u) to port %d while device started\n",
+			     "Attempt to link queue (%u) to port %d while device started",
 			     dev->data->dev_id,
 				queues[0],
 				p->id);
@@ -110,7 +110,7 @@ opdl_port_link(struct rte_eventdev *dev,
 	/* Max of 1 queue per port */
 	if (num > 1) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Attempt to link more than one queue (%u) to port %d requested\n",
+			     "Attempt to link more than one queue (%u) to port %d requested",
 			     dev->data->dev_id,
 				num,
 				p->id);
@@ -120,7 +120,7 @@ opdl_port_link(struct rte_eventdev *dev,

 	if (!p->configured) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "port %d not configured, cannot link to %u\n",
+			     "port %d not configured, cannot link to %u",
 			     dev->data->dev_id,
 				p->id,
 				queues[0]);
@@ -130,7 +130,7 @@ opdl_port_link(struct rte_eventdev *dev,

 	if (p->external_qid != OPDL_INVALID_QID) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "port %d already linked to queue %u, cannot link to %u\n",
+			     "port %d already linked to queue %u, cannot link to %u",
 			     dev->data->dev_id,
 				p->id,
 				p->external_qid,
@@ -157,7 +157,7 @@ opdl_port_unlink(struct rte_eventdev *dev,

 	if (unlikely(dev->data->dev_started)) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Attempt to unlink queue (%u) to port %d while device started\n",
+			     "Attempt to unlink queue (%u) to port %d while device started",
 			     dev->data->dev_id,
 			     queues[0],
 			     p->id);
@@ -188,7 +188,7 @@ opdl_port_setup(struct rte_eventdev *dev,
 	/* Check if port already configured */
 	if (p->configured) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Attempt to setup port %d which is already setup\n",
+			     "Attempt to setup port %d which is already setup",
 			     dev->data->dev_id,
 			     p->id);
 		return -EDQUOT;
@@ -244,7 +244,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
 	/* Extra sanity check, probably not needed */
 	if (queue_id == OPDL_INVALID_QID) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Invalid queue id %u requested\n",
+			     "Invalid queue id %u requested",
 			     dev->data->dev_id,
 			     queue_id);
 		return -EINVAL;
@@ -252,7 +252,7 @@ opdl_queue_setup(struct rte_eventdev *dev,

 	if (device->nb_q_md > device->max_queue_nb) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "Max number of queues %u exceeded by request %u\n",
+			     "Max number of queues %u exceeded by request %u",
 			     dev->data->dev_id,
 			     device->max_queue_nb,
 			     device->nb_q_md);
@@ -262,7 +262,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
 	if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
 	    & conf->event_queue_cfg) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "QUEUE_CFG_ALL_TYPES not supported\n",
+			     "QUEUE_CFG_ALL_TYPES not supported",
 			     dev->data->dev_id);
 		return -ENOTSUP;
 	} else if (RTE_EVENT_QUEUE_CFG_SINGLE_LINK
@@ -281,7 +281,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
 			break;
 		default:
 			PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-				     "Unknown queue type %d requested\n",
+				     "Unknown queue type %d requested",
 				     dev->data->dev_id,
 				     conf->event_queue_cfg);
 			return -EINVAL;
@@ -292,7 +292,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
 	for (i = 0; i < device->nb_q_md; i++) {
 		if (device->q_md[i].ext_id == queue_id) {
 			PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-				     "queue id %u already setup\n",
+				     "queue id %u already setup",
 				     dev->data->dev_id,
 				     queue_id);
 			return -EINVAL;
@@ -352,7 +352,7 @@ opdl_dev_configure(const struct rte_eventdev *dev)

 	if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
 		PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
-			     "DEQUEUE_TIMEOUT not supported\n",
+			     "DEQUEUE_TIMEOUT not supported",
 			     dev->data->dev_id);
 		return -ENOTSUP;
 	}
@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)

 		if (!kvlist) {
 			PMD_DRV_LOG(INFO,
-					"Ignoring unsupported parameters when creating device '%s'\n",
+					"Ignoring unsupported parameters when creating device '%s'",
 					name);
 		} else {
 			int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)

 	PMD_DRV_LOG(INFO, "DEV_ID:[%02d] : "
 		      "Success - creating eventdev device %s, numa_node:[%d], do_validation:[%s]"
-			  " , self_test:[%s]\n",
+			  " , self_test:[%s]",
 		      dev->data->dev_id,
 		      name,
 		      socket_id,
@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
 	if (name == NULL)
 		return -EINVAL;

-	PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s\n", name);
+	PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s", name);

 	return rte_event_pmd_vdev_uninit(name);
 }
diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c
index b69c4769dc..9b0c4db5ce 100644
--- a/drivers/event/opdl/opdl_test.c
+++ b/drivers/event/opdl/opdl_test.c
@@ -101,7 +101,7 @@ init(struct test *t, int nb_queues, int nb_ports)

 	ret = rte_event_dev_configure(evdev, &config);
 	if (ret < 0)
-		PMD_DRV_LOG(ERR, "%d: Error configuring device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error configuring device", __LINE__);
 	return ret;
 };

@@ -119,7 +119,7 @@ create_ports(struct test *t, int num_ports)

 	for (i = 0; i < num_ports; i++) {
 		if (rte_event_port_setup(evdev, i, &conf) < 0) {
-			PMD_DRV_LOG(ERR, "Error setting up port %d\n", i);
+			PMD_DRV_LOG(ERR, "Error setting up port %d", i);
 			return -1;
 		}
 		t->port[i] = i;
@@ -158,7 +158,7 @@ create_queues_type(struct test *t, int num_qids, enum queue_type flags)

 	for (i = t->nb_qids ; i < t->nb_qids + num_qids; i++) {
 		if (rte_event_queue_setup(evdev, i, &conf) < 0) {
-			PMD_DRV_LOG(ERR, "%d: error creating qid %d\n ",
+			PMD_DRV_LOG(ERR, "%d: error creating qid %d ",
 					__LINE__, i);
 			return -1;
 		}
@@ -180,7 +180,7 @@ cleanup(struct test *t __rte_unused)
 {
 	rte_event_dev_stop(evdev);
 	rte_event_dev_close(evdev);
-	PMD_DRV_LOG(ERR, "clean up for test done\n");
+	PMD_DRV_LOG(ERR, "clean up for test done");
 	return 0;
 };

@@ -202,7 +202,7 @@ ordered_basic(struct test *t)
 	if (init(t, 2, tx_port+1) < 0 ||
 	    create_ports(t, tx_port+1) < 0 ||
 	    create_queues_type(t, 2, OPDL_Q_TYPE_ORDERED)) {
-		PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
 		return -1;
 	}

@@ -226,7 +226,7 @@ ordered_basic(struct test *t)
 		err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
 				1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+			PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
 					__LINE__);
 			cleanup(t);
 			return -1;
@@ -236,13 +236,13 @@ ordered_basic(struct test *t)
 	err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
 			1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error mapping TX  qid\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: error mapping TX  qid", __LINE__);
 		cleanup(t);
 		return -1;
 	}

 	if (rte_event_dev_start(evdev) < 0) {
-		PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
 		return -1;
 	}
 	/* Enqueue 3 packets to the rx port */
@@ -250,7 +250,7 @@ ordered_basic(struct test *t)
 		struct rte_event ev;
 		mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
 		if (!mbufs[i]) {
-			PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+			PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
 			return -1;
 		}

@@ -262,7 +262,7 @@ ordered_basic(struct test *t)
 		/* generate pkt and enqueue */
 		err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
 					__LINE__, i, err);
 			return -1;
 		}
@@ -278,7 +278,7 @@ ordered_basic(struct test *t)
 		deq_pkts = rte_event_dequeue_burst(evdev, t->port[i],
 				&deq_ev[i], 1, 0);
 		if (deq_pkts != 1) {
-			PMD_DRV_LOG(ERR, "%d: Failed to deq\n", __LINE__);
+			PMD_DRV_LOG(ERR, "%d: Failed to deq", __LINE__);
 			rte_event_dev_dump(evdev, stdout);
 			return -1;
 		}
@@ -286,7 +286,7 @@ ordered_basic(struct test *t)

 		if (seq != (i-1)) {
 			PMD_DRV_LOG(ERR, " seq test failed ! eq is %d , "
-					"port number is %u\n", seq, i);
+					"port number is %u", seq, i);
 			return -1;
 		}
 	}
@@ -298,7 +298,7 @@ ordered_basic(struct test *t)
 		deq_ev[i].queue_id = t->qid[1];
 		err = rte_event_enqueue_burst(evdev, t->port[i], &deq_ev[i], 1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: Failed to enqueue\n", __LINE__);
+			PMD_DRV_LOG(ERR, "%d: Failed to enqueue", __LINE__);
 			return -1;
 		}
 	}
@@ -309,7 +309,7 @@ ordered_basic(struct test *t)

 	/* Check to see if we've got all 3 packets */
 	if (deq_pkts != 3) {
-		PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+		PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
 			__LINE__, deq_pkts, tx_port);
 		rte_event_dev_dump(evdev, stdout);
 		return 1;
@@ -339,7 +339,7 @@ atomic_basic(struct test *t)
 	if (init(t, 2, tx_port+1) < 0 ||
 	    create_ports(t, tx_port+1) < 0 ||
 	    create_queues_type(t, 2, OPDL_Q_TYPE_ATOMIC)) {
-		PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
 		return -1;
 	}

@@ -364,7 +364,7 @@ atomic_basic(struct test *t)
 		err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
 				1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+			PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
 					__LINE__);
 			cleanup(t);
 			return -1;
@@ -374,13 +374,13 @@ atomic_basic(struct test *t)
 	err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
 			1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error mapping TX  qid\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: error mapping TX  qid", __LINE__);
 		cleanup(t);
 		return -1;
 	}

 	if (rte_event_dev_start(evdev) < 0) {
-		PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
 		return -1;
 	}

@@ -389,7 +389,7 @@ atomic_basic(struct test *t)
 		struct rte_event ev;
 		mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
 		if (!mbufs[i]) {
-			PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+			PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
 			return -1;
 		}

@@ -402,7 +402,7 @@ atomic_basic(struct test *t)
 		/* generate pkt and enqueue */
 		err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
 					__LINE__, i, err);
 			return -1;
 		}
@@ -419,7 +419,7 @@ atomic_basic(struct test *t)

 		if (t->port[i] != 2) {
 			if (deq_pkts != 0) {
-				PMD_DRV_LOG(ERR, "%d: deq none zero !\n",
+				PMD_DRV_LOG(ERR, "%d: deq none zero !",
 						__LINE__);
 				rte_event_dev_dump(evdev, stdout);
 				return -1;
@@ -427,7 +427,7 @@ atomic_basic(struct test *t)
 		} else {

 			if (deq_pkts != 3) {
-				PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !\n",
+				PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !",
 						__LINE__, deq_pkts);
 				rte_event_dev_dump(evdev, stdout);
 				return -1;
@@ -444,7 +444,7 @@ atomic_basic(struct test *t)

 			if (err != 3) {
 				PMD_DRV_LOG(ERR, "port %d: Failed to enqueue pkt %u, "
-						"retval = %u\n",
+						"retval = %u",
 						t->port[i], 3, err);
 				return -1;
 			}
@@ -460,7 +460,7 @@ atomic_basic(struct test *t)

 	/* Check to see if we've got all 3 packets */
 	if (deq_pkts != 3) {
-		PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+		PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
 			__LINE__, deq_pkts, tx_port);
 		rte_event_dev_dump(evdev, stdout);
 		return 1;
@@ -568,7 +568,7 @@ single_link_w_stats(struct test *t)
 	    create_ports(t, 3) < 0 || /* 0,1,2 */
 	    create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
 	    create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
-		PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
 		return -1;
 	}

@@ -587,7 +587,7 @@ single_link_w_stats(struct test *t)
 	err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
 				  1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+		PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
 		       __LINE__,
 		       t->port[1],
 		       t->qid[0]);
@@ -598,7 +598,7 @@ single_link_w_stats(struct test *t)
 	err = rte_event_port_link(evdev, t->port[2], &t->qid[1], NULL,
 				  1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+		PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
 		       __LINE__,
 		       t->port[2],
 		       t->qid[1]);
@@ -607,7 +607,7 @@ single_link_w_stats(struct test *t)
 	}

 	if (rte_event_dev_start(evdev) != 0) {
-		PMD_DRV_LOG(ERR, "%d: failed to start device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: failed to start device", __LINE__);
 		cleanup(t);
 		return -1;
 	}
@@ -619,7 +619,7 @@ single_link_w_stats(struct test *t)
 		struct rte_event ev;
 		mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
 		if (!mbufs[i]) {
-			PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+			PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
 			return -1;
 		}

@@ -631,7 +631,7 @@ single_link_w_stats(struct test *t)
 		/* generate pkt and enqueue */
 		err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
 		if (err != 1) {
-			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+			PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
 			       __LINE__,
 			       t->port[rx_port],
 			       err);
@@ -647,7 +647,7 @@ single_link_w_stats(struct test *t)
 					   deq_ev, 3, 0);

 	if (deq_pkts != 3) {
-		PMD_DRV_LOG(ERR, "%d: deq not 3 !\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: deq not 3 !", __LINE__);
 		cleanup(t);
 		return -1;
 	}
@@ -662,7 +662,7 @@ single_link_w_stats(struct test *t)
 					   NEW_NUM_PACKETS);

 	if (deq_pkts != 2) {
-		PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!\n", __LINE__, deq_pkts);
+		PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!", __LINE__, deq_pkts);
 		cleanup(t);
 		return -1;
 	}
@@ -676,7 +676,7 @@ single_link_w_stats(struct test *t)

 	/* Check to see if we've got all 2 packets */
 	if (deq_pkts != 2) {
-		PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d\n",
+		PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d",
 			__LINE__, deq_pkts, tx_port);
 		cleanup(t);
 		return -1;
@@ -706,7 +706,7 @@ single_link(struct test *t)
 	    create_ports(t, 3) < 0 || /* 0,1,2 */
 	    create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
 	    create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
-		PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
 		return -1;
 	}

@@ -725,7 +725,7 @@ single_link(struct test *t)
 	err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
 				  1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
 		cleanup(t);
 		return -1;
 	}
@@ -733,14 +733,14 @@ single_link(struct test *t)
 	err = rte_event_port_link(evdev, t->port[2], &t->qid[0], NULL,
 				  1);
 	if (err != 1) {
-		PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
 		cleanup(t);
 		return -1;
 	}

 	if (rte_event_dev_start(evdev) == 0) {
 		PMD_DRV_LOG(ERR, "%d: start DIDN'T FAIL with more than 1 "
-				"SINGLE_LINK PORT\n", __LINE__);
+				"SINGLE_LINK PORT", __LINE__);
 		cleanup(t);
 		return -1;
 	}
@@ -789,7 +789,7 @@ qid_basic(struct test *t)
 	if (init(t, NUM_QUEUES, NUM_QUEUES+1) < 0 ||
 	    create_ports(t, NUM_QUEUES+1) < 0 ||
 	    create_queues_type(t, NUM_QUEUES, OPDL_Q_TYPE_ORDERED)) {
-		PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+		PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
 		return -1;
 	}

@@ -805,7 +805,7 @@ qid_basic(struct test *t)

 		if (nb_linked != 1) {

-			PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u\n",
+			PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u",
 					__FILE__,
 					__LINE__,
 					i + 1,
@@ -826,7 +826,7 @@ qid_basic(struct test *t)
 					&t_qid,
 					NULL,
 					1) > 0) {
-			PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail\n",
+			PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail",
 					__FILE__,
 					__LINE__);
 			err = -1;
@@ -841,7 +841,7 @@ qid_basic(struct test *t)
 					BATCH_SIZE,
 					0);
 			if (test_num_events != 0) {
-				PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device\n",
+				PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device",
 						__FILE__,
 						__LINE__,
 						p_id);
@@ -855,7 +855,7 @@ qid_basic(struct test *t)
 					ev,
 					BATCH_SIZE);
 			if (test_num_events != 0) {
-				PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device\n",
+				PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device",
 						__FILE__,
 						__LINE__,
 						p_id);
@@ -868,7 +868,7 @@ qid_basic(struct test *t)
 	/* Start the device */
 	if (!err) {
 		if (rte_event_dev_start(evdev) < 0) {
-			PMD_DRV_LOG(ERR, "%s:%d: Error with start call\n",
+			PMD_DRV_LOG(ERR, "%s:%d: Error with start call",
 					__FILE__,
 					__LINE__);
 			err = -1;
@@ -884,7 +884,7 @@ qid_basic(struct test *t)
 					&t_qid,
 					NULL,
 					1) > 0) {
-			PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail\n",
+			PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail",
 					__FILE__,
 					__LINE__);
 			err = -1;
@@ -904,7 +904,7 @@ qid_basic(struct test *t)
 				ev,
 				BATCH_SIZE);
 		if (num_events != BATCH_SIZE) {
-			PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets\n",
+			PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets",
 					__FILE__,
 					__LINE__);
 			err = -1;
@@ -921,7 +921,7 @@ qid_basic(struct test *t)
 					0);

 			if (num_events != BATCH_SIZE) {
-				PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u\n",
+				PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u",
 						__FILE__,
 						__LINE__,
 						p_id);
@@ -930,7 +930,7 @@ qid_basic(struct test *t)
 			}

 			if (ev[0].queue_id != q_id) {
-				PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]\n",
+				PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]",
 						__FILE__,
 						__LINE__,
 						p_id,
@@ -949,7 +949,7 @@ qid_basic(struct test *t)
 					ev,
 					BATCH_SIZE);
 			if (num_events != BATCH_SIZE) {
-				PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u\n",
+				PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u",
 						__FILE__,
 						__LINE__,
 						p_id,
@@ -967,7 +967,7 @@ qid_basic(struct test *t)
 				BATCH_SIZE,
 				0);
 		if (num_events != BATCH_SIZE) {
-			PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u\n",
+			PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u",
 					__FILE__,
 					__LINE__,
 					p_id);
@@ -993,17 +993,17 @@ opdl_selftest(void)
 	evdev = rte_event_dev_get_dev_id(eventdev_name);

 	if (evdev < 0) {
-		PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.\n",
+		PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.",
 				__LINE__, eventdev_name);
 		/* turn on stats by default */
 		if (rte_vdev_init(eventdev_name, "do_validation=1") < 0) {
-			PMD_DRV_LOG(ERR, "Error creating eventdev\n");
+			PMD_DRV_LOG(ERR, "Error creating eventdev");
 			free(t);
 			return -1;
 		}
 		evdev = rte_event_dev_get_dev_id(eventdev_name);
 		if (evdev < 0) {
-			PMD_DRV_LOG(ERR, "Error finding newly created eventdev\n");
+			PMD_DRV_LOG(ERR, "Error finding newly created eventdev");
 			free(t);
 			return -1;
 		}
@@ -1019,27 +1019,27 @@ opdl_selftest(void)
 				512, /* use very small mbufs */
 				rte_socket_id());
 		if (!eventdev_func_mempool) {
-			PMD_DRV_LOG(ERR, "ERROR creating mempool\n");
+			PMD_DRV_LOG(ERR, "ERROR creating mempool");
 			free(t);
 			return -1;
 		}
 	}
 	t->mbuf_pool = eventdev_func_mempool;

-	PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...\n");
+	PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...");
 	ret = ordered_basic(t);

-	PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...\n");
+	PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...");
 	ret = atomic_basic(t);


-	PMD_DRV_LOG(ERR, "*** Running QID  Basic test...\n");
+	PMD_DRV_LOG(ERR, "*** Running QID  Basic test...");
 	ret = qid_basic(t);

-	PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...\n");
+	PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...");
 	ret = single_link(t);

-	PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...\n");
+	PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...");
 	ret = single_link_w_stats(t);

 	/*
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 2096496917..babe77a20f 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -173,7 +173,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 			dev->data->socket_id,
 			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
 	if (p->rx_worker_ring == NULL) {
-		SW_LOG_ERR("Error creating RX worker ring for port %d\n",
+		SW_LOG_ERR("Error creating RX worker ring for port %d",
 				port_id);
 		return -1;
 	}
@@ -193,7 +193,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 			RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
 	if (p->cq_worker_ring == NULL) {
 		rte_event_ring_free(p->rx_worker_ring);
-		SW_LOG_ERR("Error creating CQ worker ring for port %d\n",
+		SW_LOG_ERR("Error creating CQ worker ring for port %d",
 				port_id);
 		return -1;
 	}
@@ -253,7 +253,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,

 		if (!window_size) {
 			SW_LOG_DBG(
-				"invalid reorder_window_size for ordered queue\n"
+				"invalid reorder_window_size for ordered queue"
 				);
 			goto cleanup;
 		}
@@ -262,7 +262,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,
 				window_size * sizeof(qid->reorder_buffer[0]),
 				0, socket_id);
 		if (!qid->reorder_buffer) {
-			SW_LOG_DBG("reorder_buffer malloc failed\n");
+			SW_LOG_DBG("reorder_buffer malloc failed");
 			goto cleanup;
 		}

@@ -334,7 +334,7 @@ sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
 		type = SW_SCHED_TYPE_DIRECT;
 	} else if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
 			& conf->event_queue_cfg) {
-		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
+		SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported");
 		return -ENOTSUP;
 	}

@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)

 	/* check a service core is mapped to this service */
 	if (!rte_service_runstate_get(sw->service_id)) {
-		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
+		SW_LOG_ERR("Warning: No Service core enabled on service %s",
 				sw->service_name);
 		return -ENOENT;
 	}
@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
 		if (sw->ports[i].rx_worker_ring == NULL) {
-			SW_LOG_ERR("Port %d not configured\n", i);
+			SW_LOG_ERR("Port %d not configured", i);
 			return -ESTALE;
 		}

@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
 	for (i = 0; i < sw->qid_count; i++)
 		if (!sw->qids[i].initialized ||
 		    sw->qids[i].cq_num_mapped_cqs == 0) {
-			SW_LOG_ERR("Queue %d not configured\n", i);
+			SW_LOG_ERR("Queue %d not configured", i);
 			return -ENOLINK;
 		}

@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)

 		if (!kvlist) {
 			SW_LOG_INFO(
-				"Ignoring unsupported parameters when creating device '%s'\n",
+				"Ignoring unsupported parameters when creating device '%s'",
 				name);
 		} else {
 			int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
 	SW_LOG_INFO(
 			"Creating eventdev sw device %s, numa_node=%d, "
 			"sched_quanta=%d, credit_quanta=%d "
-			"min_burst=%d, deq_burst=%d, refill_once=%d\n",
+			"min_burst=%d, deq_burst=%d, refill_once=%d",
 			name, socket_id, sched_quanta, credit_quanta,
 			min_burst_size, deq_burst_size, refill_once);

@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
 	if (name == NULL)
 		return -EINVAL;

-	SW_LOG_INFO("Closing eventdev sw device %s\n", name);
+	SW_LOG_INFO("Closing eventdev sw device %s", name);

 	return rte_event_pmd_vdev_uninit(name);
 }
diff --git a/drivers/event/sw/sw_evdev_xstats.c b/drivers/event/sw/sw_evdev_xstats.c
index fbac8f3ab5..076b982ab8 100644
--- a/drivers/event/sw/sw_evdev_xstats.c
+++ b/drivers/event/sw/sw_evdev_xstats.c
@@ -419,7 +419,7 @@ sw_xstats_get_names(const struct rte_eventdev *dev,
 		start_offset = sw->xstats_offset_for_qid[queue_port_id];
 		break;
 	default:
-		SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()\n");
+		SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()");
 		return -EINVAL;
 	};

@@ -470,7 +470,7 @@ sw_xstats_update(struct sw_evdev *sw, enum rte_event_dev_xstats_mode mode,
 		xstats_mode_count = sw->xstats_count_per_qid[queue_port_id];
 		break;
 	default:
-		SW_LOG_ERR("Invalid mode received in sw_xstats_get()\n");
+		SW_LOG_ERR("Invalid mode received in sw_xstats_get()");
 		goto invalid_value;
 	};

diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 84371d5d1a..b0c6d153e4 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_MEMPOOL_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			goto err1;
 		}
@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
 			DPAA2_MEMPOOL_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return;
 		}
@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret != 0) {
 			DPAA2_MEMPOOL_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return ret;
 		}
@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
 	msl = rte_mem_virt2memseg_list(vaddr);

 	if (!msl) {
-		DPAA2_MEMPOOL_DEBUG("Memsegment is External.\n");
+		DPAA2_MEMPOOL_DEBUG("Memsegment is External.");
 		rte_fslmc_vfio_mem_dmamap((size_t)vaddr,
 				(size_t)paddr, (size_t)len);
 	}
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 1513c632c6..966fee8bfe 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -134,7 +134,7 @@ octeontx_fpa_gpool_alloc(unsigned int object_size)

 		if (res->sz128 == 0) {
 			res->sz128 = sz128;
-			fpavf_log_dbg("gpool %d blk_sz %d\n", res->vf_id,
+			fpavf_log_dbg("gpool %d blk_sz %d", res->vf_id,
 				      sz128);

 			return res->vf_id;
@@ -273,7 +273,7 @@ octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
 		goto err;
 	}

-	fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+	fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64,
 		      fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
 		      cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);

@@ -351,8 +351,7 @@ octeontx_fpapf_aura_attach(unsigned int gpool_index)
 					sizeof(struct octeontx_mbox_fpa_cfg),
 					&resp, sizeof(resp));
 	if (ret < 0) {
-		fpavf_log_err("Could not attach fpa ");
-		fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+		fpavf_log_err("Could not attach fpa aura %d to pool %d. Err=%d. FuncErr=%d",
 			      FPA_AURA_IDX(gpool_index), gpool_index, ret,
 			      hdr.res_code);
 		ret = -EACCES;
@@ -380,7 +379,7 @@ octeontx_fpapf_aura_detach(unsigned int gpool_index)
 	hdr.vfid = gpool_index;
 	ret = octeontx_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
 	if (ret < 0) {
-		fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+		fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d",
 			      FPA_AURA_IDX(gpool_index), ret,
 			      hdr.res_code);
 		ret = -EINVAL;
@@ -428,8 +427,7 @@ octeontx_fpapf_start_count(uint16_t gpool_index)
 	hdr.vfid = gpool_index;
 	ret = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
 	if (ret < 0) {
-		fpavf_log_err("Could not start buffer counting for ");
-		fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+		fpavf_log_err("Could not start buffer counting for FPA pool %d. Err=%d. FuncErr=%d",
 			      gpool_index, ret, hdr.res_code);
 		ret = -EINVAL;
 		goto err;
@@ -636,7 +634,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
 	cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
 					FPA_VF_VHAURA_CNT(gaura)));
 	if (cnt) {
-		fpavf_log_dbg("buffer exist in pool cnt %" PRId64 "\n", cnt);
+		fpavf_log_dbg("buffer exist in pool cnt %" PRId64, cnt);
 		return -EBUSY;
 	}

@@ -664,7 +662,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
 				    (pool_bar + FPA_VF_VHAURA_OP_ALLOC(gaura)));

 		if (node == NULL) {
-			fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf\n",
+			fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf",
 				      gaura, avail);
 			break;
 		}
@@ -684,7 +682,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
 		curr = curr[0]) {
 		if (curr == curr[0] ||
 			((uintptr_t)curr != ((uintptr_t)curr[0] - sz))) {
-			fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)\n",
+			fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)",
 				      gpool, curr, curr[0]);
 		}
 	}
@@ -705,7 +703,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)

 	ret = octeontx_fpapf_aura_detach(gpool);
 	if (ret) {
-		fpavf_log_err("Failed to detach gaura %u. error code=%d\n",
+		fpavf_log_err("Failed to detach gaura %u. error code=%d",
 			      gpool, ret);
 	}

@@ -757,7 +755,7 @@ octeontx_fpavf_identify(void *bar0)
 	stack_ln_ptr = fpavf_read64((void *)((uintptr_t)bar0 +
 					FPA_VF_VHPOOL_THRESHOLD(0)));
 	if (vf_idx >= FPA_VF_MAX) {
-		fpavf_log_err("vf_id(%d) greater than max vf (32)\n", vf_id);
+		fpavf_log_err("vf_id(%d) greater than max vf (32)", vf_id);
 		return -E2BIG;
 	}
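
Besides the "\n" drops, the hunks above fold messages that were split across
two consecutive fpavf_log_err()/fpavf_log_info() calls into a single call.
With a line-oriented wrapper, each call emits its own prefixed,
newline-terminated line, so one logical message used to come out as two
fragments. A hypothetical sketch of the effect (not the exact fpavf macro):

    #include <stdio.h>

    /* Hypothetical stand-in for fpavf_log_err(); illustrative only. */
    #define LOG_ERR(fmt, ...) \
            fprintf(stderr, "fpavf: " fmt "\n", ##__VA_ARGS__)

    int main(void)
    {
            /* Split message: two prefixed lines, the first cut short:
             *   fpavf: Could not attach fpa
             *   fpavf: aura 5 to pool 2. Err=-1. FuncErr=3
             */
            LOG_ERR("Could not attach fpa ");
            LOG_ERR("aura %d to pool %d. Err=%d. FuncErr=%d", 5, 2, -1, 3);

            /* Merged into one call: a single complete line, as in the patch. */
            LOG_ERR("Could not attach fpa aura %d to pool %d. Err=%d. FuncErr=%d",
                    5, 2, -1, 3);
            return 0;
    }
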

diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index f4de1c8412..631e521b58 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -27,11 +27,11 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
 		goto _end;

 	if ((uint32_t)rc != object_size)
-		fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+		fpavf_log_err("buffer size mismatch: %d instead of %u",
 				rc, object_size);

-	fpavf_log_info("Pool created %p with .. ", (void *)pool);
-	fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+	fpavf_log_info("Pool created %p with .. obj_sz %d, cnt %d",
+		(void *)pool, object_size, memseg_count);

 	/* assign pool handle to mempool */
 	mp->pool_id = (uint64_t)pool;
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index 41f3b7a95d..3c328d9d0e 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -108,14 +108,14 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10

 	kvlist = rte_kvargs_parse(devargs->args, valid_args);
 	if (kvlist == NULL) {
-		plt_err("Error parsing devargs\n");
+		plt_err("Error parsing devargs");
 		return -EINVAL;
 	}

 	if (rte_kvargs_count(kvlist, CN10K_ML_FW_PATH) == 1) {
 		ret = rte_kvargs_process(kvlist, CN10K_ML_FW_PATH, &parse_string_arg, &fw_path);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n", CN10K_ML_FW_PATH);
+			plt_err("Error processing arguments, key = %s", CN10K_ML_FW_PATH);
 			ret = -EINVAL;
 			goto exit;
 		}
@@ -126,7 +126,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_FW_ENABLE_DPE_WARNINGS,
 					 &parse_integer_arg, &cn10k_mldev->fw.enable_dpe_warnings);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n",
+			plt_err("Error processing arguments, key = %s",
 				CN10K_ML_FW_ENABLE_DPE_WARNINGS);
 			ret = -EINVAL;
 			goto exit;
@@ -138,7 +138,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_FW_REPORT_DPE_WARNINGS,
 					 &parse_integer_arg, &cn10k_mldev->fw.report_dpe_warnings);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n",
+			plt_err("Error processing arguments, key = %s",
 				CN10K_ML_FW_REPORT_DPE_WARNINGS);
 			ret = -EINVAL;
 			goto exit;
@@ -150,7 +150,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_CACHE_MODEL_DATA, &parse_integer_arg,
 					 &cn10k_mldev->cache_model_data);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n",
+			plt_err("Error processing arguments, key = %s",
 				CN10K_ML_DEV_CACHE_MODEL_DATA);
 			ret = -EINVAL;
 			goto exit;
@@ -162,7 +162,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_ALLOC_MODE, &parse_string_arg,
 					 &ocm_alloc_mode);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_ALLOC_MODE);
+			plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_ALLOC_MODE);
 			ret = -EINVAL;
 			goto exit;
 		}
@@ -173,7 +173,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK, &parse_integer_arg,
 					 &cn10k_mldev->hw_queue_lock);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n",
+			plt_err("Error processing arguments, key = %s",
 				CN10K_ML_DEV_HW_QUEUE_LOCK);
 			ret = -EINVAL;
 			goto exit;
@@ -185,7 +185,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
 		ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg,
 					 &cn10k_mldev->ocm_page_size);
 		if (ret < 0) {
-			plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_PAGE_SIZE);
+			plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_PAGE_SIZE);
 			ret = -EINVAL;
 			goto exit;
 		}
@@ -204,7 +204,7 @@ check_args:
 	} else {
 		if ((cn10k_mldev->fw.enable_dpe_warnings < 0) ||
 		    (cn10k_mldev->fw.enable_dpe_warnings > 1)) {
-			plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
+			plt_err("Invalid argument, %s = %d", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
 				cn10k_mldev->fw.enable_dpe_warnings);
 			ret = -EINVAL;
 			goto exit;
@@ -218,7 +218,7 @@ check_args:
 	} else {
 		if ((cn10k_mldev->fw.report_dpe_warnings < 0) ||
 		    (cn10k_mldev->fw.report_dpe_warnings > 1)) {
-			plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_REPORT_DPE_WARNINGS,
+			plt_err("Invalid argument, %s = %d", CN10K_ML_FW_REPORT_DPE_WARNINGS,
 				cn10k_mldev->fw.report_dpe_warnings);
 			ret = -EINVAL;
 			goto exit;
@@ -231,7 +231,7 @@ check_args:
 		cn10k_mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT;
 	} else {
 		if ((cn10k_mldev->cache_model_data < 0) || (cn10k_mldev->cache_model_data > 1)) {
-			plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_CACHE_MODEL_DATA,
+			plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_CACHE_MODEL_DATA,
 				cn10k_mldev->cache_model_data);
 			ret = -EINVAL;
 			goto exit;
@@ -244,7 +244,7 @@ check_args:
 	} else {
 		if (!((strcmp(ocm_alloc_mode, "lowest") == 0) ||
 		      (strcmp(ocm_alloc_mode, "largest") == 0))) {
-			plt_err("Invalid argument, %s = %s\n", CN10K_ML_OCM_ALLOC_MODE,
+			plt_err("Invalid argument, %s = %s", CN10K_ML_OCM_ALLOC_MODE,
 				ocm_alloc_mode);
 			ret = -EINVAL;
 			goto exit;
@@ -257,7 +257,7 @@ check_args:
 		cn10k_mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
 	} else {
 		if ((cn10k_mldev->hw_queue_lock < 0) || (cn10k_mldev->hw_queue_lock > 1)) {
-			plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_HW_QUEUE_LOCK,
+			plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK,
 				cn10k_mldev->hw_queue_lock);
 			ret = -EINVAL;
 			goto exit;
@@ -269,7 +269,7 @@ check_args:
 		cn10k_mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
 	} else {
 		if (cn10k_mldev->ocm_page_size < 0) {
-			plt_err("Invalid argument, %s = %d\n", CN10K_ML_OCM_PAGE_SIZE,
+			plt_err("Invalid argument, %s = %d", CN10K_ML_OCM_PAGE_SIZE,
 				cn10k_mldev->ocm_page_size);
 			ret = -EINVAL;
 			goto exit;
@@ -284,7 +284,7 @@ check_args:
 		}

 		if (!found) {
-			plt_err("Unsupported ocm_page_size = %d\n", cn10k_mldev->ocm_page_size);
+			plt_err("Unsupported ocm_page_size = %d", cn10k_mldev->ocm_page_size);
 			ret = -EINVAL;
 			goto exit;
 		}
@@ -773,7 +773,7 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev)
 		/* Read firmware image to a buffer */
 		ret = rte_firmware_read(fw->path, &fw_buffer, &fw_size);
 		if ((ret < 0) || (fw_buffer == NULL)) {
-			plt_err("Unable to read firmware data: %s\n", fw->path);
+			plt_err("Unable to read firmware data: %s", fw->path);
 			return ret;
 		}

diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 971362b242..7bd73727e1 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -437,7 +437,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,

 			model = cnxk_mldev->mldev->data->models[model_id];
 			if (model == NULL) {
-				plt_err("Invalid model_id = %d\n", model_id);
+				plt_err("Invalid model_id = %d", model_id);
 				return -EINVAL;
 			}
 		}
@@ -454,7 +454,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,
 		} else {
 			for (j = 0; j < nb_ids; j++) {
 				if (stat_ids[j] < start_id || stat_ids[j] > end_id) {
-					plt_err("Invalid stat_ids[%d] = %d for model_id = %d\n", j,
+					plt_err("Invalid stat_ids[%d] = %d for model_id = %d", j,
 						stat_ids[j], lcl_model_id);
 					return -EINVAL;
 				}
@@ -510,12 +510,12 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co

 	cnxk_ml_dev_info_get(dev, &dev_info);
 	if (conf->nb_models > dev_info.max_models) {
-		plt_err("Invalid device config, nb_models > %u\n", dev_info.max_models);
+		plt_err("Invalid device config, nb_models > %u", dev_info.max_models);
 		return -EINVAL;
 	}

 	if (conf->nb_queue_pairs > dev_info.max_queue_pairs) {
-		plt_err("Invalid device config, nb_queue_pairs > %u\n", dev_info.max_queue_pairs);
+		plt_err("Invalid device config, nb_queue_pairs > %u", dev_info.max_queue_pairs);
 		return -EINVAL;
 	}

@@ -533,10 +533,10 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co
 		plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u",
 			   conf->nb_queue_pairs, conf->nb_models);
 	} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) {
-		plt_err("Device can't be reconfigured in started state\n");
+		plt_err("Device can't be reconfigured in started state");
 		return -ENOTSUP;
 	} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) {
-		plt_err("Device can't be reconfigured after close\n");
+		plt_err("Device can't be reconfigured after close");
 		return -ENOTSUP;
 	}

@@ -853,7 +853,7 @@ cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id,
 	uint32_t nb_desc;

 	if (queue_pair_id >= dev->data->nb_queue_pairs) {
-		plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)\n", queue_pair_id,
+		plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)", queue_pair_id,
 			dev->data->nb_queue_pairs);
 		return -EINVAL;
 	}
@@ -1249,11 +1249,11 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
 	}

 	if ((total_wb_pages + max_scratch_pages) > ocm->num_pages) {
-		plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) >  %u\n",
+		plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) >  %u",
 			lcl_model_id, total_wb_pages, max_scratch_pages, ocm->num_pages);

 		if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
-			plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u\n", layer_id,
+			plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u", layer_id,
 				   model->layer[layer_id].glow.ocm_map.wb_pages,
 				   model->layer[layer_id].glow.ocm_map.scratch_pages);
 #ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM
@@ -1262,7 +1262,7 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
 			     layer_id++) {
 				if (model->layer[layer_id].type == ML_CNXK_LAYER_TYPE_MRVL) {
 					plt_ml_dbg(
-						"layer_id = %u: wb_pages = %u, scratch_pages = %u\n",
+						"layer_id = %u: wb_pages = %u, scratch_pages = %u",
 						layer_id,
 						model->layer[layer_id].glow.ocm_map.wb_pages,
 						model->layer[layer_id].glow.ocm_map.scratch_pages);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index cb6f8141a8..0f367faad5 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -359,13 +359,13 @@ atl_rx_init(struct rte_eth_dev *eth_dev)
 		buff_size = RTE_ALIGN_FLOOR(buff_size, 1024);
 		if (buff_size > HW_ATL_B0_RXD_BUF_SIZE_MAX) {
 			PMD_INIT_LOG(WARNING,
-				"Port %d queue %d: mem pool buff size is too big\n",
+				"Port %d queue %d: mem pool buff size is too big",
 				rxq->port_id, rxq->queue_id);
 			buff_size = HW_ATL_B0_RXD_BUF_SIZE_MAX;
 		}
 		if (buff_size < 1024) {
 			PMD_INIT_LOG(ERR,
-				"Port %d queue %d: mem pool buff size is too small\n",
+				"Port %d queue %d: mem pool buff size is too small",
 				rxq->port_id, rxq->queue_id);
 			return -EINVAL;
 		}
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
index 84d11ab3a5..06d79115b9 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
@@ -76,7 +76,7 @@ int hw_atl_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops)
 					self->fw_ver_actual) == 0) {
 		*fw_ops = &aq_fw_2x_ops;
 	} else {
-		PMD_DRV_LOG(ERR, "Bad FW version detected: %x\n",
+		PMD_DRV_LOG(ERR, "Bad FW version detected: %x",
 			  self->fw_ver_actual);
 		return -EOPNOTSUPP;
 	}
@@ -124,7 +124,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
 		AQ_HW_SLEEP(10);
 	}
 	if (k == 1000) {
-		PMD_DRV_LOG(ERR, "MAC kickstart failed\n");
+		PMD_DRV_LOG(ERR, "MAC kickstart failed");
 		return -EIO;
 	}

@@ -152,7 +152,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
 		AQ_HW_SLEEP(10);
 	}
 	if (k == 1000) {
-		PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+		PMD_DRV_LOG(ERR, "FW kickstart failed");
 		return -EIO;
 	}
 	/* Old FW requires fixed delay after init */
@@ -209,7 +209,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
 		aq_hw_write_reg(self, 0x534, 0xA0);

 	if (rbl_status == 0xF1A7) {
-		PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented\n");
+		PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented");
 		return -EOPNOTSUPP;
 	}

@@ -221,7 +221,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
 		AQ_HW_SLEEP(10);
 	}
 	if (k == 1000) {
-		PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+		PMD_DRV_LOG(ERR, "FW kickstart failed");
 		return -EIO;
 	}
 	/* Old FW requires fixed delay after init */
@@ -246,7 +246,7 @@ int hw_atl_utils_soft_reset(struct aq_hw_s *self)
 	}

 	if (k == 1000) {
-		PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started\n");
+		PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started");
 		return -EOPNOTSUPP;
 	}

diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 6ce87f83f4..da45ebf45f 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 	tc_num = pdata->pfc_map[pfc_conf->priority];

 	if (pfc_conf->priority >= pdata->hw_feat.tc_cnt) {
-		PMD_INIT_LOG(ERR, "Max supported  traffic class: %d\n",
+		PMD_INIT_LOG(ERR, "Max supported  traffic class: %d",
 				pdata->hw_feat.tc_cnt);
 	return -EINVAL;
 	}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 597ee43359..3153cc4d80 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -8124,7 +8124,7 @@ static int bnx2x_get_shmem_info(struct bnx2x_softc *sc)
 	val = sc->devinfo.bc_ver >> 8;
 	if (val < BNX2X_BC_VER) {
 		/* for now only warn later we might need to enforce this */
-		PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC\n",
+		PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC",
 			    BNX2X_BC_VER, val);
 	}
 	sc->link_params.feature_config_flags |=
@@ -9489,16 +9489,16 @@ static int bnx2x_prev_unload(struct bnx2x_softc *sc)
 	hw_lock_val = (REG_RD(sc, hw_lock_reg));
 	if (hw_lock_val) {
 		if (hw_lock_val & HW_LOCK_RESOURCE_NVRAM) {
-			PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock\n");
+			PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock");
 			REG_WR(sc, MCP_REG_MCPR_NVM_SW_ARB,
 			       (MCPR_NVM_SW_ARB_ARB_REQ_CLR1 << SC_PORT(sc)));
 		}
-		PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock\n");
+		PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock");
 		REG_WR(sc, hw_lock_reg, 0xffffffff);
 	}

 	if (MCPR_ACCESS_LOCK_LOCK & REG_RD(sc, MCP_REG_MCPR_ACCESS_LOCK)) {
-		PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR\n");
+		PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR");
 		REG_WR(sc, MCP_REG_MCPR_ACCESS_LOCK, 0);
 	}

diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 06c21ebe6d..3cca8a07f3 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -702,7 +702,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t member_id)
 		ret = rte_eth_link_get_nowait(members[i], &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Member (port %u) link get failed: %s\n",
+				"Member (port %u) link get failed: %s",
 				members[i], rte_strerror(-ret));
 			continue;
 		}
@@ -879,7 +879,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 		ret = rte_eth_link_get_nowait(member_id, &link_info);
 		if (ret < 0) {
 			RTE_BOND_LOG(ERR,
-				"Member (port %u) link get failed: %s\n",
+				"Member (port %u) link get failed: %s",
 				member_id, rte_strerror(-ret));
 		}

diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 56945e2349..253f38da4a 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -60,7 +60,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
 			0, data_size, socket_id);

 		if (internals->mode6.mempool == NULL) {
-			RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.\n",
+			RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.",
 				     bond_dev->device->name);
 			goto mempool_alloc_error;
 		}
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 99e496556a..ffc1322047 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -482,7 +482,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
 	ret = rte_eth_dev_info_get(member_port_id, &dev_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
-			"%s: Error during getting device (port %u) info: %s\n",
+			"%s: Error during getting device (port %u) info: %s",
 			__func__, member_port_id, strerror(-ret));

 		return ret;
@@ -609,7 +609,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
 					&bonding_eth_dev->data->port_id);
 			internals->member_count--;
 			RTE_BOND_LOG(ERR,
-				"Member (port %u) link get failed: %s\n",
+				"Member (port %u) link get failed: %s",
 				member_port_id, rte_strerror(-ret));
 			return -1;
 		}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c40d18d128..4144c86be4 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -191,7 +191,7 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
 	ret = rte_eth_dev_info_get(member_port, &member_info);
 	if (ret != 0) {
 		RTE_BOND_LOG(ERR,
-			"%s: Error during getting device (port %u) info: %s\n",
+			"%s: Error during getting device (port %u) info: %s",
 			__func__, member_port, strerror(-ret));

 		return ret;
@@ -221,7 +221,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
 		ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
 		if (ret != 0) {
 			RTE_BOND_LOG(ERR,
-				"%s: Error during getting device (port %u) info: %s\n",
+				"%s: Error during getting device (port %u) info: %s",
 				__func__, bond_dev->data->port_id,
 				strerror(-ret));

@@ -2289,7 +2289,7 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 			ret = rte_eth_dev_info_get(member.port_id, &member_info);
 			if (ret != 0) {
 				RTE_BOND_LOG(ERR,
-					"%s: Error during getting device (port %u) info: %s\n",
+					"%s: Error during getting device (port %u) info: %s",
 					__func__,
 					member.port_id,
 					strerror(-ret));
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index c841b31051..60baf806ab 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
 	}

 	if (mp == NULL || mp[0] == NULL || mp[1] == NULL) {
-		plt_err("invalid memory pools\n");
+		plt_err("invalid memory pools");
 		return -EINVAL;
 	}

@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
 		return -EINVAL;
 	}

-	plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u\n", (*spb_pool)->name,
+	plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u", (*spb_pool)->name,
 		 (*lpb_pool)->name, (*lpb_pool)->elt_size, (*spb_pool)->elt_size);

 	return 0;
diff --git a/drivers/net/cnxk/cnxk_ethdev_mcs.c b/drivers/net/cnxk/cnxk_ethdev_mcs.c
index 06ef7c98f3..119060bcf3 100644
--- a/drivers/net/cnxk/cnxk_ethdev_mcs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_mcs.c
@@ -568,17 +568,17 @@ cnxk_eth_macsec_session_stats_get(struct cnxk_eth_dev *dev, struct cnxk_macsec_s
 	req.id = sess->flow_id;
 	req.dir = sess->dir;
 	roc_mcs_flowid_stats_get(mcs_dev->mdev, &req, &flow_stats);
-	plt_nix_dbg("\n******* FLOW_ID IDX[%u] STATS dir: %u********\n", sess->flow_id, sess->dir);
-	plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64 "\n", flow_stats.tcam_hit_cnt);
+	plt_nix_dbg("******* FLOW_ID IDX[%u] STATS dir: %u********", sess->flow_id, sess->dir);
+	plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64, flow_stats.tcam_hit_cnt);

 	req.id = mcs_dev->port_id;
 	req.dir = sess->dir;
 	roc_mcs_port_stats_get(mcs_dev->mdev, &req, &port_stats);
-	plt_nix_dbg("\n********** PORT[0] STATS ****************\n");
-	plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64 "\n", port_stats.tcam_miss_cnt);
-	plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64 "\n", port_stats.parser_err_cnt);
-	plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64 "\n", port_stats.preempt_err_cnt);
-	plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64 "\n", port_stats.sectag_insert_err_cnt);
+	plt_nix_dbg("********** PORT[0] STATS ****************");
+	plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64, port_stats.tcam_miss_cnt);
+	plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64, port_stats.parser_err_cnt);
+	plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64, port_stats.preempt_err_cnt);
+	plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64, port_stats.sectag_insert_err_cnt);

 	req.id = sess->secy_id;
 	req.dir = sess->dir;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index c8f4848f92..89e00f8fc7 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
 		/* Wait for sq entries to be flushed */
 		rc = roc_nix_tm_sq_flush_spin(sq);
 		if (rc) {
-			plt_err("Failed to drain sq, rc=%d\n", rc);
+			plt_err("Failed to drain sq, rc=%d", rc);
 			goto exit;
 		}
 		if (data->tx_queue_state[i] == RTE_ETH_QUEUE_STATE_STARTED) {
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 40569ddc6f..011229a470 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -2020,7 +2020,7 @@ cpfl_metadata_write_port_id(struct cpfl_itf *itf)

 	dev_id = cpfl_get_port_id(itf);
 	if (dev_id == CPFL_INVALID_HW_ID) {
-		PMD_DRV_LOG(ERR, "fail to get hw ID\n");
+		PMD_DRV_LOG(ERR, "fail to get hw ID");
 		return false;
 	}
 	cpfl_metadata_write16(&itf->adapter->meta, type, offset, dev_id << 3);
diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
index be34da9fa2..42553c9641 100644
--- a/drivers/net/cpfl/cpfl_fxp_rule.c
+++ b/drivers/net/cpfl/cpfl_fxp_rule.c
@@ -77,7 +77,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m

 		if (ret && ret != CPFL_ERR_CTLQ_NO_WORK && ret != CPFL_ERR_CTLQ_ERROR &&
 		    ret != CPFL_ERR_CTLQ_EMPTY) {
-			PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
+			PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x", ret);
 			retries++;
 			continue;
 		}
@@ -108,7 +108,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
 			buff_cnt = dma ? 1 : 0;
 			ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt, &dma);
 			if (ret)
-				PMD_INIT_LOG(WARNING, "could not posted recv bufs\n");
+				PMD_INIT_LOG(WARNING, "could not posted recv bufs");
 		}
 		break;
 	}
@@ -131,7 +131,7 @@ cpfl_mod_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,

 	/* prepare rule blob */
 	if (!dma->va) {
-		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+		PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
 		return -1;
 	}
 	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
@@ -176,7 +176,7 @@ cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
 	uint16_t cfg_ctrl;

 	if (!dma->va) {
-		PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+		PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
 		return -1;
 	}
 	blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8e610b6bba..c5b1f161fd 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -728,7 +728,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,

 	total_nb_rx_desc += nb_rx_desc;
 	if (total_nb_rx_desc > MAX_NB_RX_DESC) {
-		DPAA2_PMD_WARN("\nTotal nb_rx_desc exceeds %d limit. Please use Normal buffers",
+		DPAA2_PMD_WARN("Total nb_rx_desc exceeds %d limit. Please use Normal buffers",
 			       MAX_NB_RX_DESC);
 		DPAA2_PMD_WARN("To use Normal buffers, run 'export DPNI_NORMAL_BUF=1' before running dynamic_dpl.sh script");
 	}
@@ -1063,7 +1063,7 @@ dpaa2_dev_rx_queue_count(void *rx_queue)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return -EINVAL;
 		}
@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	if (ret == -1)
 		DPAA2_PMD_DEBUG("No change in status");
 	else
-		DPAA2_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+		DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
 			       link.link_status ? "Up" : "Down");

 	return ret;
@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
 				   dpaa2_ethq->tc_index, flow_id,
 				   OPR_OPT_CREATE, &ocfg, 0);
 		if (ret) {
-			DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
+			DPAA2_PMD_ERR("Error setting opr: ret: %d", ret);
 			return ret;
 		}

@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return;
 		}
@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		WRIOP_SS_INITIALIZER(priv);
 		ret = dpaa2_eth_load_wriop_soft_parser(priv, DPNI_SS_INGRESS);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(" Error(%d) in loading softparser\n",
+			DPAA2_PMD_ERR(" Error(%d) in loading softparser",
 				      ret);
 			return ret;
 		}
@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
 		ret = dpaa2_eth_enable_wriop_soft_parser(priv,
 							 DPNI_SS_INGRESS);
 		if (ret < 0) {
-			DPAA2_PMD_ERR(" Error(%d) in enabling softparser\n",
+			DPAA2_PMD_ERR(" Error(%d) in enabling softparser",
 				      ret);
 			return ret;
 		}
@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
 				DPAA2_MAX_SGS * sizeof(struct qbman_sge),
 				rte_socket_id());
 			if (dpaa2_tx_sg_pool == NULL) {
-				DPAA2_PMD_ERR("SG pool creation failed\n");
+				DPAA2_PMD_ERR("SG pool creation failed");
 				return -ENOMEM;
 			}
 		}
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e60650..e590f6f748 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3360,7 +3360,7 @@ dpaa2_flow_verify_action(
 				rxq = priv->rx_vq[rss_conf->queue[i]];
 				if (rxq->tc_index != attr->group) {
 					DPAA2_PMD_ERR(
-						"Queue/Group combination are not supported\n");
+						"Queue/Group combination are not supported");
 					return -ENOTSUP;
 				}
 			}
@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 						priv->token, &qos_cfg);
 					if (ret < 0) {
 						DPAA2_PMD_ERR(
-						"RSS QoS table can not be configured(%d)\n",
+						"RSS QoS table can not be configured(%d)",
 							ret);
 						return -1;
 					}
@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 					&priv->extract.tc_key_extract[flow->tc_id].dpkg);
 			if (ret < 0) {
 				DPAA2_PMD_ERR(
-				"unable to set flow distribution.please check queue config\n");
+				"unable to set flow distribution.please check queue config");
 				return ret;
 			}

 			/* Allocate DMA'ble memory to write the rules */
 			param = (size_t)rte_malloc(NULL, 256, 64);
 			if (!param) {
-				DPAA2_PMD_ERR("Memory allocation failure\n");
+				DPAA2_PMD_ERR("Memory allocation failure");
 				return -1;
 			}

@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 						 priv->token, &tc_cfg);
 			if (ret < 0) {
 				DPAA2_PMD_ERR(
-					"RSS TC table cannot be configured: %d\n",
+					"RSS TC table cannot be configured: %d",
 					ret);
 				rte_free((void *)param);
 				return -1;
@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 							 priv->token, &qos_cfg);
 				if (ret < 0) {
 					DPAA2_PMD_ERR(
-					"RSS QoS dist can't be configured-%d\n",
+					"RSS QoS dist can't be configured-%d",
 					ret);
 					return -1;
 				}
@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
 	int ret = 0;

 	if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
-		DPAA2_PMD_ERR("Priority group is out of range\n");
+		DPAA2_PMD_ERR("Priority group is out of range");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
-		DPAA2_PMD_ERR("Priority within the group is out of range\n");
+		DPAA2_PMD_ERR("Priority within the group is out of range");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(attr->egress)) {
 		DPAA2_PMD_ERR(
-			"Flow configuration is not supported on egress side\n");
+			"Flow configuration is not supported on egress side");
 		ret = -ENOTSUP;
 	}
 	if (unlikely(!attr->ingress)) {
-		DPAA2_PMD_ERR("Ingress flag must be configured\n");
+		DPAA2_PMD_ERR("Ingress flag must be configured");
 		ret = -EINVAL;
 	}
 	return ret;
@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
 	if (ret < 0) {
 		DPAA2_PMD_ERR(
-			"Failure to get dpni@%p attribute, err code  %d\n",
+			"Failure to get dpni@%p attribute, err code  %d",
 			dpni, ret);
 		rte_flow_error_set(error, EPERM,
 			   RTE_FLOW_ERROR_TYPE_ATTR,
@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
 	if (ret < 0) {
 		DPAA2_PMD_ERR(
-			"Invalid attributes are given\n");
+			"Invalid attributes are given");
 		rte_flow_error_set(error, EPERM,
 			   RTE_FLOW_ERROR_TYPE_ATTR,
 			   flow_attr, "invalid");
@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	ret = dpaa2_dev_verify_patterns(pattern);
 	if (ret < 0) {
 		DPAA2_PMD_ERR(
-			"Invalid pattern list is given\n");
+			"Invalid pattern list is given");
 		rte_flow_error_set(error, EPERM,
 			   RTE_FLOW_ERROR_TYPE_ITEM,
 			   pattern, "invalid");
@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
 	ret = dpaa2_dev_verify_actions(actions);
 	if (ret < 0) {
 		DPAA2_PMD_ERR(
-			"Invalid action list is given\n");
+			"Invalid action list is given");
 		rte_flow_error_set(error, EPERM,
 			   RTE_FLOW_ERROR_TYPE_ACTION,
 			   actions, "invalid");
@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
 	if (!key_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+			"Memory allocation failure for rule configuration");
 		goto mem_failure;
 	}
 	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
 	if (!mask_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+			"Memory allocation failure for rule configuration");
 		goto mem_failure;
 	}

@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 	key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
 	if (!key_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+			"Memory allocation failure for rule configuration");
 		goto mem_failure;
 	}
 	mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
 	if (!mask_iova) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+			"Memory allocation failure for rule configuration");
 		goto mem_failure;
 	}

diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 2ff1a98fda..7dd5a60966 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -88,7 +88,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
 			   (2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
 	if (!flow) {
 		DPAA2_PMD_ERR(
-			"Memory allocation failure for rule configuration\n");
+			"Memory allocation failure for rule configuration");
 		goto creation_error;
 	}
 	key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
@@ -211,7 +211,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,

 	vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
 	if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
-		DPAA2_PMD_ERR("Invalid destination id\n");
+		DPAA2_PMD_ERR("Invalid destination id");
 		goto creation_error;
 	}
 	dpdmux_action.dest_if = vf_conf->id;
diff --git a/drivers/net/dpaa2/dpaa2_recycle.c b/drivers/net/dpaa2/dpaa2_recycle.c
index fbfdf360d1..4fde9b95a0 100644
--- a/drivers/net/dpaa2/dpaa2_recycle.c
+++ b/drivers/net/dpaa2/dpaa2_recycle.c
@@ -423,7 +423,7 @@ ls_mac_serdes_lpbk_support(uint16_t mac_id,

 	sd_idx = ls_serdes_cfg_to_idx(sd_cfg, sd_id);
 	if (sd_idx < 0) {
-		DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist\n",
+		DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist",
 			sd_cfg);
 		return false;
 	}
@@ -552,7 +552,7 @@ ls_serdes_eth_lpbk(uint16_t mac_id, int en)
 				(serdes_id - LSX_SERDES_1) * 0x10000,
 				sizeof(struct ccsr_ls_serdes) / 64 * 64 + 64);
 	if (!serdes_base) {
-		DPAA2_PMD_ERR("Serdes register map failed\n");
+		DPAA2_PMD_ERR("Serdes register map failed");
 		return -ENOMEM;
 	}

@@ -587,7 +587,7 @@ lx_serdes_eth_lpbk(uint16_t mac_id, int en)
 					(serdes_id - LSX_SERDES_1) * 0x10000,
 					sizeof(struct ccsr_lx_serdes) / 64 * 64 + 64);
 	if (!serdes_base) {
-		DPAA2_PMD_ERR("Serdes register map failed\n");
+		DPAA2_PMD_ERR("Serdes register map failed");
 		return -ENOMEM;
 	}

diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..b64232b88f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -640,7 +640,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d\n",
+			DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return;
 		}
@@ -691,7 +691,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
 		hw_annot_addr = (void *)((size_t)v_addr + DPAA2_FD_PTA_SIZE);
 		fas = hw_annot_addr;

-		DPAA2_PMD_ERR("\n\n[%d] error packet on port[%d]:"
+		DPAA2_PMD_ERR("[%d] error packet on port[%d]:"
 			" fd_off: %d, fd_err: %x, fas_status: %x",
 			rte_lcore_id(), eth_data->port_id,
 			DPAA2_GET_FD_OFFSET(fd), DPAA2_GET_FD_ERR(fd),
@@ -976,7 +976,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1107,7 +1107,7 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1256,7 +1256,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1573,7 +1573,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -1747,7 +1747,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_PMD_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 63463c4fbf..eb649fb063 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -165,7 +165,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,

 	addr = rte_malloc(NULL, sp_param.size, 64);
 	if (!addr) {
-		DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+		DPAA2_PMD_ERR("Memory unavailable for soft parser param");
 		return -1;
 	}

@@ -174,7 +174,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,

 	ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_load_sw_sequence failed\n");
+		DPAA2_PMD_ERR("dpni_load_sw_sequence failed");
 		rte_free(addr);
 		return ret;
 	}
@@ -214,7 +214,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
 	if (cfg.param_size) {
 		param_addr = rte_malloc(NULL, cfg.param_size, 64);
 		if (!param_addr) {
-			DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+			DPAA2_PMD_ERR("Memory unavailable for soft parser param");
 			return -1;
 		}

@@ -227,7 +227,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,

 	ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
 	if (ret) {
-		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d\n",
+		DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
 			priv->hw_id);
 		rte_free(param_addr);
 		return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 8fe5bfa013..3c0f282ec3 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -584,7 +584,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
 		return -1;
 	}

-	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d\n\n", tc_id,
+	DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
 			node->parent->channel_id);
 	ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
 			     ((node->parent->channel_id << 8) | tc_id),
@@ -653,7 +653,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
 	int i;

 	if (n == 1) {
-		DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+		DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
 				nodes[n - 1]->id, nodes[n - 1]->priority,
 				n - 1);
 		dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -669,7 +669,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
 	}
 	dpaa2_tm_sort_and_configure(dev, nodes, n - 1);

-	DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+	DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
 			nodes[n - 1]->id, nodes[n - 1]->priority,
 			n - 1);
 	dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -709,7 +709,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 			}
 		}
 		if (i > 0) {
-			DPAA2_PMD_DEBUG("Configure queues\n");
+			DPAA2_PMD_DEBUG("Configure queues");
 			dpaa2_tm_sort_and_configure(dev, nodes, i);
 		}
 	}
@@ -733,13 +733,13 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 				node->profile->params.peak.rate / (1024 * 1024);
 			/* root node */
 			if (node->parent == NULL) {
-				DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u\n",
+				DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u",
 						tx_cr_shaper.rate_limit,
 						tx_cr_shaper.max_burst_size);
 				param = 0x2;
 				param |= node->profile->params.pkt_length_adjust << 16;
 			} else {
-				DPAA2_PMD_DEBUG("Channel = %d S.rate = %u\n",
+				DPAA2_PMD_DEBUG("Channel = %d S.rate = %u",
 						node->channel_id,
 						tx_cr_shaper.rate_limit);
 				param = (node->channel_id << 8);
@@ -871,15 +871,15 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
 					"Scheduling Failed\n");
 			goto out;
 		}
-		DPAA2_PMD_DEBUG("########################################\n");
-		DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
+		DPAA2_PMD_DEBUG("########################################");
+		DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
 		for (t = 0; t < DPNI_MAX_TC; t++) {
 			DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
-			DPAA2_PMD_DEBUG("delta = %d\n", prio_cfg.tc_sched[t].delta_bandwidth);
+			DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
 		}
-		DPAA2_PMD_DEBUG("prioritya = %d\n", prio_cfg.prio_group_A);
-		DPAA2_PMD_DEBUG("priorityb = %d\n", prio_cfg.prio_group_B);
-		DPAA2_PMD_DEBUG("separate grps = %d\n\n", prio_cfg.separate_groups);
+		DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
+		DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
+		DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
 	}
 	return 0;

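(The multi-line debug dumps rewritten above, such as the dpaa2_tm.c priority configuration dump, illustrate the same rule from the other direction: a "\n" embedded mid-string splits one log call across several output lines, so each line that matters gets its own call. A sketch reusing the hypothetical PMD_DRV_LOG macro from the earlier aside, with made-up parameter names:)

/* One log call per output line keeps per-line prefixes, timestamps and
 * dynamic log filtering intact; an embedded "\n" plus the macro's own
 * newline would produce blank-padded, harder-to-parse output. */
static __rte_unused void
dump_prio_sketch(int channel_idx, int prio_a, int prio_b)
{
	PMD_DRV_LOG(DEBUG, "Channel idx = %d", channel_idx);
	PMD_DRV_LOG(DEBUG, "prioritya = %d", prio_a);
	PMD_DRV_LOG(DEBUG, "priorityb = %d", prio_b);
}
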
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 8858f975f8..d64a1aedd3 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
 		PMD_DRV_LOG(ERR,
 			    "Address change required to access page 0xA2, "
 			    "but not supported. Please report the module "
-			    "type to the driver maintainers.\n");
+			    "type to the driver maintainers.");
 		page_swap = true;
 	}

diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index c9352f0746..d8c30ef150 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
 	char buf[RTE_ETHER_ADDR_FMT_SIZE];

 	rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
-	ENETC_PMD_NOTICE("%s%s\n", name, buf);
+	ENETC_PMD_NOTICE("%s%s", name, buf);
 }

 static int
@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
 		char *first_byte;

 		ENETC_PMD_NOTICE("MAC is not available for this SI, "
-				"set random MAC\n");
+				"set random MAC");
 		mac = (uint32_t *)hw->mac.addr;
 		*mac = (uint32_t)rte_rand();
 		first_byte = (char *)mac;
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 898aad1c37..8c7067fbb5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -253,7 +253,7 @@ enetfec_eth_link_update(struct rte_eth_dev *dev,
 	link.link_status = lstatus;
 	link.link_speed = RTE_ETH_SPEED_NUM_1G;

-	ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+	ENETFEC_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
 			 "Up");

 	return rte_eth_linkstatus_set(dev, &link);
@@ -462,7 +462,7 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
 	}

 	if (queue_idx >= ENETFEC_MAX_Q) {
-		ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d\n",
+		ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d",
 			queue_idx, ENETFEC_MAX_Q);
 		return -EINVAL;
 	}
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
index 6539cbb354..9f4e896985 100644
--- a/drivers/net/enetfec/enet_uio.c
+++ b/drivers/net/enetfec/enet_uio.c
@@ -177,7 +177,7 @@ config_enetfec_uio(struct enetfec_private *fep)

 	/* Mapping is done only one time */
 	if (enetfec_count > 0) {
-		ENETFEC_PMD_INFO("Mapped!\n");
+		ENETFEC_PMD_INFO("Mapped!");
 		return 0;
 	}

@@ -191,7 +191,7 @@ config_enetfec_uio(struct enetfec_private *fep)
 	/* Open device file */
 	uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
 	if (uio_job->uio_fd < 0) {
-		ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+		ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file");
 		return -1;
 	}

@@ -230,7 +230,7 @@ enetfec_configure(void)

 	d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
 	if (d == NULL) {
-		ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+		ENETFEC_PMD_ERR("Error opening directory '%s': %s",
 			FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
 		return -1;
 	}
@@ -249,7 +249,7 @@ enetfec_configure(void)
 			ret = sscanf(dir->d_name + strlen("uio"), "%d",
 							&uio_minor_number);
 			if (ret < 0)
-				ENETFEC_PMD_ERR("Error: not find minor number\n");
+				ENETFEC_PMD_ERR("Error: not find minor number");
 			/*
 			 * Open file uioX/name and read first line which
 			 * contains the name for the device. Based on the
@@ -259,7 +259,7 @@ enetfec_configure(void)
 			ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
 					dir->d_name, "name", uio_name);
 			if (ret != 0) {
-				ENETFEC_PMD_INFO("file_read_first_line failed\n");
+				ENETFEC_PMD_INFO("file_read_first_line failed");
 				closedir(d);
 				return -1;
 			}
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index b04b6c9aa1..1121874346 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
 	char mac_str[RTE_ETHER_ADDR_FMT_SIZE];

 	rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, addr);
-	ENICPMD_LOG(DEBUG, " %s address %s\n",
+	ENICPMD_LOG(DEBUG, " %s address %s",
 		     add ? "add" : "remove", mac_str);
 }

@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
 		    rte_is_broadcast_ether_addr(addr)) {
 			rte_ether_format_addr(mac_str,
 					RTE_ETHER_ADDR_FMT_SIZE, addr);
-			ENICPMD_LOG(ERR, " invalid multicast address %s\n",
+			ENICPMD_LOG(ERR, " invalid multicast address %s",
 				     mac_str);
 			return -EINVAL;
 		}
@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,

 	/* Flush all if requested */
 	if (nb_mc_addr == 0 || mc_addr_set == NULL) {
-		ENICPMD_LOG(DEBUG, " flush multicast addresses\n");
+		ENICPMD_LOG(DEBUG, " flush multicast addresses");
 		for (i = 0; i < enic->mc_count; i++) {
 			addr = &enic->mc_addrs[i];
 			debug_log_add_del_addr(addr, false);
@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
 	}

 	if (nb_mc_addr > ENIC_MULTICAST_PERFECT_FILTERS) {
-		ENICPMD_LOG(ERR, " too many multicast addresses: max=%d\n",
+		ENICPMD_LOG(ERR, " too many multicast addresses: max=%d",
 			     ENIC_MULTICAST_PERFECT_FILTERS);
 		return -ENOSPC;
 	}
@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
 	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
-		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
+		ENICPMD_LOG(DEBUG, " overlay offload is not supported");
 		return -ENOTSUP;
 	}
 	return 0;
@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
 	cfg = vxlan ? OVERLAY_CFG_VXLAN_PORT_UPDATE :
 		OVERLAY_CFG_GENEVE_PORT_UPDATE;
 	if (vnic_dev_overlay_offload_cfg(enic->vdev, cfg, port)) {
-		ENICPMD_LOG(DEBUG, " failed to update tunnel port\n");
+		ENICPMD_LOG(DEBUG, " failed to update tunnel port");
 		return -EINVAL;
 	}
-	ENICPMD_LOG(DEBUG, " updated %s port to %u\n",
+	ENICPMD_LOG(DEBUG, " updated %s port to %u",
 		    vxlan ? "vxlan" : "geneve", port);
 	if (vxlan)
 		enic->vxlan_port = port;
@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	 * "Adding" a new port number replaces it.
 	 */
 	if (tnl->udp_port == port || tnl->udp_port == 0) {
-		ENICPMD_LOG(DEBUG, " %u is already configured or invalid\n",
+		ENICPMD_LOG(DEBUG, " %u is already configured or invalid",
 			     tnl->udp_port);
 		return -EINVAL;
 	}
@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	 * which is tied to inner RSS and TSO.
 	 */
 	if (tnl->udp_port != port) {
-		ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port\n",
+		ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port",
 			     tnl->udp_port);
 		return -EINVAL;
 	}
@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	}
 	if (eth_da.nb_representor_ports > 0 &&
 	    eth_da.type != RTE_ETH_REPRESENTOR_VF) {
-		ENICPMD_LOG(ERR, "unsupported representor type: %s\n",
+		ENICPMD_LOG(ERR, "unsupported representor type: %s",
 			    pci_dev->device.devargs->args);
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index e6c9ad442a..758000ea21 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -1351,14 +1351,14 @@ static void
 enic_dump_actions(const struct filter_action_v2 *ea)
 {
 	if (ea->type == FILTER_ACTION_RQ_STEERING) {
-		ENICPMD_LOG(INFO, "Action(V1), queue: %u\n", ea->rq_idx);
+		ENICPMD_LOG(INFO, "Action(V1), queue: %u", ea->rq_idx);
 	} else if (ea->type == FILTER_ACTION_V2) {
-		ENICPMD_LOG(INFO, "Actions(V2)\n");
+		ENICPMD_LOG(INFO, "Actions(V2)");
 		if (ea->flags & FILTER_ACTION_RQ_STEERING_FLAG)
-			ENICPMD_LOG(INFO, "\tqueue: %u\n",
+			ENICPMD_LOG(INFO, "\tqueue: %u",
 			       enic_sop_rq_idx_to_rte_idx(ea->rq_idx));
 		if (ea->flags & FILTER_ACTION_FILTER_ID_FLAG)
-			ENICPMD_LOG(INFO, "\tfilter_id: %u\n", ea->filter_id);
+			ENICPMD_LOG(INFO, "\tfilter_id: %u", ea->filter_id);
 	}
 }

@@ -1374,13 +1374,13 @@ enic_dump_filter(const struct filter_v2 *filt)

 	switch (filt->type) {
 	case FILTER_IPV4_5TUPLE:
-		ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE\n");
+		ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE");
 		break;
 	case FILTER_USNIC_IP:
 	case FILTER_DPDK_1:
 		/* FIXME: this should be a loop */
 		gp = &filt->u.generic_1;
-		ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x\n",
+		ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x",
 		       gp->val_vlan, gp->mask_vlan);

 		if (gp->mask_flags & FILTER_GENERIC_1_IPV4)
@@ -1438,7 +1438,7 @@ enic_dump_filter(const struct filter_v2 *filt)
 				 ? "ipfrag(y)" : "ipfrag(n)");
 		else
 			sprintf(ipfrag, "%s ", "ipfrag(x)");
-		ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s\n", ip4, ip6, udp,
+		ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s", ip4, ip6, udp,
 			 tcp, tcpudp, ip4csum, l4csum, ipfrag);

 		for (i = 0; i < FILTER_GENERIC_1_NUM_LAYERS; i++) {
@@ -1455,7 +1455,7 @@ enic_dump_filter(const struct filter_v2 *filt)
 				bp += 2;
 			}
 			*bp = '\0';
-			ENICPMD_LOG(INFO, "\tL%u mask: %s\n", i + 2, buf);
+			ENICPMD_LOG(INFO, "\tL%u mask: %s", i + 2, buf);
 			bp = buf;
 			for (j = 0; j <= mbyte; j++) {
 				sprintf(bp, "%02x",
@@ -1463,11 +1463,11 @@ enic_dump_filter(const struct filter_v2 *filt)
 				bp += 2;
 			}
 			*bp = '\0';
-			ENICPMD_LOG(INFO, "\tL%u  val: %s\n", i + 2, buf);
+			ENICPMD_LOG(INFO, "\tL%u  val: %s", i + 2, buf);
 		}
 		break;
 	default:
-		ENICPMD_LOG(INFO, "FILTER UNKNOWN\n");
+		ENICPMD_LOG(INFO, "FILTER UNKNOWN");
 		break;
 	}
 }
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 5d8d29135c..8469e06de9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -64,7 +64,7 @@ static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
 	/* Pass vf not pf because of cq index calculation. See enic_alloc_wq */
 	err = enic_alloc_wq(&vf->enic, queue_idx, socket_id, nb_desc);
 	if (err) {
-		ENICPMD_LOG(ERR, "error in allocating wq\n");
+		ENICPMD_LOG(ERR, "error in allocating wq");
 		return err;
 	}
 	return 0;
@@ -104,7 +104,7 @@ static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	ret = enic_alloc_rq(&vf->enic, queue_idx, socket_id, mp, nb_desc,
 			    rx_conf->rx_free_thresh);
 	if (ret) {
-		ENICPMD_LOG(ERR, "error in allocating rq\n");
+		ENICPMD_LOG(ERR, "error in allocating rq");
 		return ret;
 	}
 	return 0;
@@ -230,14 +230,14 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
 	/* enic_enable */
 	ret = enic_alloc_rx_queue_mbufs(pf, &pf->rq[index]);
 	if (ret) {
-		ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs\n");
+		ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs");
 		return ret;
 	}
 	ret = enic_alloc_rx_queue_mbufs(pf, data_rq);
 	if (ret) {
 		/* Release the allocated mbufs for the sop rq*/
 		enic_rxmbuf_queue_release(pf, &pf->rq[index]);
-		ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs\n");
+		ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs");
 		return ret;
 	}
 	enic_start_rq(pf, vf->pf_rq_sop_idx);
@@ -430,7 +430,7 @@ static int enic_vf_stats_get(struct rte_eth_dev *eth_dev,
 	/* Get VF stats via PF */
 	err = vnic_dev_stats_dump(vf->enic.vdev, &vs);
 	if (err) {
-		ENICPMD_LOG(ERR, "error in getting stats\n");
+		ENICPMD_LOG(ERR, "error in getting stats");
 		return err;
 	}
 	stats->ipackets = vs->rx.rx_frames_ok;
@@ -453,7 +453,7 @@ static int enic_vf_stats_reset(struct rte_eth_dev *eth_dev)
 	/* Ask PF to clear VF stats */
 	err = vnic_dev_stats_clear(vf->enic.vdev);
 	if (err)
-		ENICPMD_LOG(ERR, "error in clearing stats\n");
+		ENICPMD_LOG(ERR, "error in clearing stats");
 	return err;
 }

@@ -581,7 +581,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
 	/* VF MAC */
 	err = vnic_dev_get_mac_addr(vf->enic.vdev, vf->mac_addr.addr_bytes);
 	if (err) {
-		ENICPMD_LOG(ERR, "error in getting MAC address\n");
+		ENICPMD_LOG(ERR, "error in getting MAC address");
 		return err;
 	}
 	rte_ether_addr_copy(&vf->mac_addr, vf->eth_dev->data->mac_addrs);
@@ -591,7 +591,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
 			    offsetof(struct vnic_enet_config, mtu),
 			    sizeof(c->mtu), &c->mtu);
 	if (err) {
-		ENICPMD_LOG(ERR, "error in getting MTU\n");
+		ENICPMD_LOG(ERR, "error in getting MTU");
 		return err;
 	}
 	/*
diff --git a/drivers/net/failsafe/failsafe_args.c b/drivers/net/failsafe/failsafe_args.c
index 3b867437d7..1b8f1d3050 100644
--- a/drivers/net/failsafe/failsafe_args.c
+++ b/drivers/net/failsafe/failsafe_args.c
@@ -406,7 +406,7 @@ failsafe_args_parse(struct rte_eth_dev *dev, const char *params)
 		kvlist = rte_kvargs_parse(mut_params,
 				pmd_failsafe_init_parameters);
 		if (kvlist == NULL) {
-			ERROR("Error parsing parameters, usage:\n"
+			ERROR("Error parsing parameters, usage:"
 				PMD_FAILSAFE_PARAM_STRING);
 			return -1;
 		}
diff --git a/drivers/net/failsafe/failsafe_eal.c b/drivers/net/failsafe/failsafe_eal.c
index d71b512f81..e79d3b4120 100644
--- a/drivers/net/failsafe/failsafe_eal.c
+++ b/drivers/net/failsafe/failsafe_eal.c
@@ -16,7 +16,7 @@ fs_ethdev_portid_get(const char *name, uint16_t *port_id)
 	size_t len;

 	if (name == NULL) {
-		DEBUG("Null pointer is specified\n");
+		DEBUG("Null pointer is specified");
 		return -EINVAL;
 	}
 	len = strlen(name);
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 031f3eb13f..dc4aba6e30 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -38,7 +38,7 @@ fs_flow_complain(struct rte_flow_error *error)
 		errstr = "unknown type";
 	else
 		errstr = errstrlist[error->type];
-	ERROR("Caught error type %d (%s): %s%s\n",
+	ERROR("Caught error type %d (%s): %s%s",
 		error->type, errstr,
 		error->cause ? (snprintf(buf, sizeof(buf), "cause: %p, ",
 				error->cause), buf) : "",
@@ -640,7 +640,7 @@ failsafe_eth_new_event_callback(uint16_t port_id,
 		if (sdev->state >= DEV_PROBED)
 			continue;
 		if (dev->device == NULL) {
-			WARN("Trying to probe malformed device %s.\n",
+			WARN("Trying to probe malformed device %s.",
 			     sdev->devargs.name);
 			continue;
 		}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 969ded6ced..68b7310b85 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -173,17 +173,17 @@ fs_rx_event_proxy_service_install(struct fs_priv *priv)
 		/* run the service */
 		ret = rte_service_component_runstate_set(priv->rxp.sid, 1);
 		if (ret < 0) {
-			ERROR("Failed Setting component runstate\n");
+			ERROR("Failed Setting component runstate");
 			return ret;
 		}
 		ret = rte_service_set_stats_enable(priv->rxp.sid, 1);
 		if (ret < 0) {
-			ERROR("Failed enabling stats\n");
+			ERROR("Failed enabling stats");
 			return ret;
 		}
 		ret = rte_service_runstate_set(priv->rxp.sid, 1);
 		if (ret < 0) {
-			ERROR("Failed to run service\n");
+			ERROR("Failed to run service");
 			return ret;
 		}
 		priv->rxp.sstate = SS_READY;
diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index 343bd13d67..438c0c5441 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -11,7 +11,7 @@
 #define GVE_ADMINQ_SLEEP_LEN		20
 #define GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK	100

-#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error:\n Expected: length=%d, feature_mask=%x.\n Actual: length=%d, feature_mask=%x."
+#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error: Expected: length=%d, feature_mask=%x. Actual: length=%d, feature_mask=%x."

 #define GVE_DEVICE_OPTION_TOO_BIG_FMT "Length of %s option larger than expected. Possible older version of guest driver."

diff --git a/drivers/net/hinic/base/hinic_pmd_eqs.c b/drivers/net/hinic/base/hinic_pmd_eqs.c
index fecb653401..f0e1139a98 100644
--- a/drivers/net/hinic/base/hinic_pmd_eqs.c
+++ b/drivers/net/hinic/base/hinic_pmd_eqs.c
@@ -471,7 +471,7 @@ int hinic_comm_aeqs_init(struct hinic_hwdev *hwdev)

 	num_aeqs = HINIC_HWIF_NUM_AEQS(hwdev->hwif);
 	if (num_aeqs < HINIC_MIN_AEQS) {
-		PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d\n",
+		PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d",
 				HINIC_MIN_AEQS, num_aeqs);
 		return -EINVAL;
 	}
diff --git a/drivers/net/hinic/base/hinic_pmd_mbox.c b/drivers/net/hinic/base/hinic_pmd_mbox.c
index 92a7cc1a11..a75a6953ad 100644
--- a/drivers/net/hinic/base/hinic_pmd_mbox.c
+++ b/drivers/net/hinic/base/hinic_pmd_mbox.c
@@ -310,7 +310,7 @@ static int mbox_msg_ack_aeqn(struct hinic_hwdev *hwdev)
 		/* This is used for ovs */
 		msg_ack_aeqn = HINIC_AEQN_1;
 	} else {
-		PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d\n", aeq_num);
+		PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d", aeq_num);
 		msg_ack_aeqn = -1;
 	}

@@ -372,13 +372,13 @@ static int init_mbox_info(struct hinic_recv_mbox *mbox_info)

 	mbox_info->mbox = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
 	if (!mbox_info->mbox) {
-		PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed\n");
+		PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed");
 		return -ENOMEM;
 	}

 	mbox_info->buf_out = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
 	if (!mbox_info->buf_out) {
-		PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed\n");
+		PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed");
 		err = -ENOMEM;
 		goto alloc_buf_out_err;
 	}
diff --git a/drivers/net/hinic/base/hinic_pmd_niccfg.c b/drivers/net/hinic/base/hinic_pmd_niccfg.c
index 8c08d63286..a08020313f 100644
--- a/drivers/net/hinic/base/hinic_pmd_niccfg.c
+++ b/drivers/net/hinic/base/hinic_pmd_niccfg.c
@@ -683,7 +683,7 @@ int hinic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
 				     &pause_info, sizeof(pause_info),
 				     &pause_info, &out_size);
 	if (err || !out_size || pause_info.mgmt_msg_head.status) {
-		PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+		PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x",
 			err, pause_info.mgmt_msg_head.status, out_size);
 		return -EIO;
 	}
@@ -1332,7 +1332,7 @@ int hinic_get_mgmt_version(void *hwdev, char *fw)
 				     &fw_ver, sizeof(fw_ver), &fw_ver,
 				     &out_size);
 	if (err || !out_size || fw_ver.mgmt_msg_head.status) {
-		PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x\n",
+		PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x",
 			err, fw_ver.mgmt_msg_head.status, out_size);
 		return -EIO;
 	}
@@ -1767,7 +1767,7 @@ int hinic_set_fdir_filter(void *hwdev, u8 filter_type, u8 qid, u8 type_enable,
 			&port_filer_cmd, &out_size);
 	if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
 		PMD_DRV_LOG(ERR, "Set port Q filter failed, err: %d, status: 0x%x, out size: 0x%x, type: 0x%x,"
-			" enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x\n",
+			" enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x",
 			err, port_filer_cmd.mgmt_msg_head.status, out_size,
 			filter_type, enable, qid, type_enable);
 		return -EIO;
@@ -1819,7 +1819,7 @@ int hinic_set_normal_filter(void *hwdev, u8 qid, u8 normal_type_enable,
 			&port_filer_cmd, &out_size);
 	if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
 		PMD_DRV_LOG(ERR, "Set normal filter failed, err: %d, status: 0x%x, out size: 0x%x, fdir_flag: 0x%x,"
-			" enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x\n",
+			" enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x",
 			err, port_filer_cmd.mgmt_msg_head.status, out_size,
 			flag, enable, qid, normal_type_enable, key);
 		return -EIO;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index d4978e0649..cb5c013b21 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1914,7 +1914,7 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 	nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
 	nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;

-	PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s\n",
+	PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s",
 		nic_pause.tx_pause ? "on" : "off",
 		nic_pause.rx_pause ? "on" : "off",
 		nic_pause.auto_neg ? "on" : "off");
@@ -2559,7 +2559,7 @@ static int hinic_pf_get_default_cos(struct hinic_hwdev *hwdev, u8 *cos_id)

 	valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.valid_cos_bitmap;
 	if (!valid_cos_bitmap) {
-		PMD_DRV_LOG(ERR, "PF has none cos to support\n");
+		PMD_DRV_LOG(ERR, "PF has none cos to support");
 		return -EFAULT;
 	}

diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index cb369be5be..a3b58e0a8f 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -242,7 +242,7 @@ hns3_get_rx_queue(struct rte_eth_dev *dev)
 	for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
 		rx_queues = dev->data->rx_queues;
 		if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
-			hns3_err(hw, "detect rx_queues is NULL!\n");
+			hns3_err(hw, "detect rx_queues is NULL!");
 			return NULL;
 		}

@@ -267,7 +267,7 @@ hns3_get_tx_queue(struct rte_eth_dev *dev)
 	for (queue_id = 0; queue_id < dev->data->nb_tx_queues; queue_id++) {
 		tx_queues = dev->data->tx_queues;
 		if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
-			hns3_err(hw, "detect tx_queues is NULL!\n");
+			hns3_err(hw, "detect tx_queues is NULL!");
 			return NULL;
 		}

@@ -297,7 +297,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
 	if (dev->data->nb_rx_queues < dev->data->nb_tx_queues) {
 		rx_queues = hw->fkq_data.rx_queues;
 		if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
-			hns3_err(hw, "detect rx_queues is NULL!\n");
+			hns3_err(hw, "detect rx_queues is NULL!");
 			return;
 		}
 		rxq = (struct hns3_rx_queue *)rx_queues[queue_id];
@@ -311,7 +311,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
 		queue_id = 0;

 		if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
-			hns3_err(hw, "detect tx_queues is NULL!\n");
+			hns3_err(hw, "detect tx_queues is NULL!");
 			return;
 		}
 		txq = (struct hns3_tx_queue *)tx_queues[queue_id];
@@ -961,7 +961,7 @@ hns3_rx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
 		return -EINVAL;

 	if (num > rxq->nb_rx_desc) {
-		hns3_err(hw, "Invalid BD num=%u\n", num);
+		hns3_err(hw, "Invalid BD num=%u", num);
 		return -EINVAL;
 	}

@@ -1003,7 +1003,7 @@ hns3_tx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
 		return -EINVAL;

 	if (num > txq->nb_tx_desc) {
-		hns3_err(hw, "Invalid BD num=%u\n", num);
+		hns3_err(hw, "Invalid BD num=%u", num);
 		return -EINVAL;
 	}

diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 916bf30dcb..0b768ef140 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -1806,7 +1806,7 @@ enable_tm_err_intr(struct hns3_adapter *hns, bool en)

 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret)
-		hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d\n",
+		hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d",
 			 en ? "enable" : "disable", ret);

 	return ret;
@@ -1847,7 +1847,7 @@ enable_common_err_intr(struct hns3_adapter *hns, bool en)

 	ret = hns3_cmd_send(hw, &desc[0], RTE_DIM(desc));
 	if (ret)
-		hns3_err(hw, "fail to %s common err interrupts, ret = %d\n",
+		hns3_err(hw, "fail to %s common err interrupts, ret = %d",
 			 en ? "enable" : "disable", ret);

 	return ret;
@@ -1984,7 +1984,7 @@ query_num_bds(struct hns3_hw *hw, bool is_ras, uint32_t *mpf_bd_num,
 	pf_bd_num_val = rte_le_to_cpu_32(desc.data[1]);
 	if (mpf_bd_num_val < mpf_min_bd_num || pf_bd_num_val < pf_min_bd_num) {
 		hns3_err(hw, "error bd num: mpf(%u), min_mpf(%u), "
-			 "pf(%u), min_pf(%u)\n", mpf_bd_num_val, mpf_min_bd_num,
+			 "pf(%u), min_pf(%u)", mpf_bd_num_val, mpf_min_bd_num,
 			 pf_bd_num_val, pf_min_bd_num);
 		return -EINVAL;
 	}
@@ -2061,7 +2061,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
 		opcode = HNS3_OPC_QUERY_CLEAR_PF_RAS_INT;
 		break;
 	default:
-		hns3_err(hw, "error hardware err_type = %d\n", err_type);
+		hns3_err(hw, "error hardware err_type = %d", err_type);
 		return -EINVAL;
 	}

@@ -2069,7 +2069,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
 	hns3_cmd_setup_basic_desc(&desc[0], opcode, true);
 	ret = hns3_cmd_send(hw, &desc[0], num);
 	if (ret) {
-		hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d\n",
+		hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d",
 			 opcode, ret);
 		return ret;
 	}
@@ -2097,7 +2097,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
 	hns3_cmd_reuse_desc(&desc[0], false);
 	ret = hns3_cmd_send(hw, &desc[0], num);
 	if (ret)
-		hns3_err(hw, "clear all hw err int cmd failed, ret = %d\n",
+		hns3_err(hw, "clear all hw err int cmd failed, ret = %d",
 			 ret);

 	return ret;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 894ac6dd71..c6e77d21cb 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -50,7 +50,7 @@ hns3_ptp_int_en(struct hns3_hw *hw, bool en)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret)
 		hns3_err(hw,
-			"failed to %s ptp interrupt, ret = %d\n",
+			"failed to %s ptp interrupt, ret = %d",
 			en ? "enable" : "disable", ret);

 	return ret;
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index be1be6a89c..955bc7e3af 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -355,7 +355,7 @@ hns3_get_dfx_reg_bd_num(struct hns3_hw *hw, uint32_t *bd_num_list,

 	ret = hns3_cmd_send(hw, desc, HNS3_GET_DFX_REG_BD_NUM_SIZE);
 	if (ret) {
-		hns3_err(hw, "fail to get dfx bd num, ret = %d.\n", ret);
+		hns3_err(hw, "fail to get dfx bd num, ret = %d.", ret);
 		return ret;
 	}

@@ -387,7 +387,7 @@ hns3_dfx_reg_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
 	ret = hns3_cmd_send(hw, desc, bd_num);
 	if (ret)
 		hns3_err(hw, "fail to query dfx registers, opcode = 0x%04X, "
-			 "ret = %d.\n", opcode, ret);
+			 "ret = %d.", opcode, ret);

 	return ret;
 }
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ffc1f6d874..2b043cd693 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,

 	if (eth_da.nb_representor_ports > 0 &&
 	    eth_da.type != RTE_ETH_REPRESENTOR_VF) {
-		PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+		PMD_DRV_LOG(ERR, "unsupported representor type: %s",
 			    pci_dev->device.devargs->args);
 		return -ENOTSUP;
 	}
@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)

 	val = I40E_READ_REG(hw, I40E_GL_FWSTS);
 	if (val & I40E_GL_FWSTS_FWS1B_MASK) {
-		PMD_INIT_LOG(ERR, "\nERROR: "
-			"Firmware recovery mode detected. Limiting functionality.\n"
-			"Refer to the Intel(R) Ethernet Adapters and Devices "
-			"User Guide for details on firmware recovery mode.");
+		PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
 		return -EIO;
 	}

@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
 	status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_ab,
 					      NULL);
 	if (status) {
-		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
+		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
 				status);
 		return ret;
 	}
@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
 	status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_ab,
 					      NULL);
 	if (status) {
-		PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d\n",
+		PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d",
 				status);
 		return ret;
 	}
@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
 	 * Warn users and config the default available speeds.
 	 */
 	if (is_up && !(force_speed & avail_speed)) {
-		PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+		PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
 		phy_conf.link_speed = avail_speed;
 	} else {
 		phy_conf.link_speed = is_up ? force_speed : avail_speed;
@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 				I40E_GL_MDET_TX_QUEUE_SHIFT) -
 					hw->func_caps.base_queue;
 		PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on TX "
-			"queue %d PF number 0x%02x VF number 0x%02x device %s\n",
+			"queue %d PF number 0x%02x VF number 0x%02x device %s",
 				event, queue, pf_num, vf_num, dev->data->name);
 		I40E_WRITE_REG(hw, I40E_GL_MDET_TX, I40E_MDD_CLEAR32);
 		mdd_detected = true;
@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 					hw->func_caps.base_queue;

 		PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on RX "
-				"queue %d of function 0x%02x device %s\n",
+				"queue %d of function 0x%02x device %s",
 					event, queue, func, dev->data->name);
 		I40E_WRITE_REG(hw, I40E_GL_MDET_RX, I40E_MDD_CLEAR32);
 		mdd_detected = true;
@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 		reg = I40E_READ_REG(hw, I40E_PF_MDET_TX);
 		if (reg & I40E_PF_MDET_TX_VALID_MASK) {
 			I40E_WRITE_REG(hw, I40E_PF_MDET_TX, I40E_MDD_CLEAR16);
-			PMD_DRV_LOG(WARNING, "TX driver issue detected on PF\n");
+			PMD_DRV_LOG(WARNING, "TX driver issue detected on PF");
 		}
 		reg = I40E_READ_REG(hw, I40E_PF_MDET_RX);
 		if (reg & I40E_PF_MDET_RX_VALID_MASK) {
 			I40E_WRITE_REG(hw, I40E_PF_MDET_RX,
 					I40E_MDD_CLEAR16);
-			PMD_DRV_LOG(WARNING, "RX driver issue detected on PF\n");
+			PMD_DRV_LOG(WARNING, "RX driver issue detected on PF");
 		}
 	}

@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 					I40E_MDD_CLEAR16);
 			vf->num_mdd_events++;
 			PMD_DRV_LOG(WARNING, "TX driver issue detected on VF %d %-"
-					PRIu64 "times\n",
+					PRIu64 "times",
 					i, vf->num_mdd_events);
 		}

@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
 					I40E_MDD_CLEAR16);
 			vf->num_mdd_events++;
 			PMD_DRV_LOG(WARNING, "RX driver issue detected on VF %d %-"
-					PRIu64 "times\n",
+					PRIu64 "times",
 					i, vf->num_mdd_events);
 		}
 	}
@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
 	if (!(hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE)) {
 		PMD_DRV_LOG(ERR,
 			    "Module EEPROM memory read not supported. "
-			    "Please update the NVM image.\n");
+			    "Please update the NVM image.");
 		return -EINVAL;
 	}

@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
 	if (hw->phy.link_info.phy_type == I40E_PHY_TYPE_EMPTY) {
 		PMD_DRV_LOG(ERR,
 			    "Cannot read module EEPROM memory. "
-			    "No module connected.\n");
+			    "No module connected.");
 		return -EINVAL;
 	}

@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
 		if (sff8472_swap & I40E_MODULE_SFF_ADDR_MODE) {
 			PMD_DRV_LOG(WARNING,
 				    "Module address swap to access "
-				    "page 0xA2 is not supported.\n");
+				    "page 0xA2 is not supported.");
 			modinfo->type = RTE_ETH_MODULE_SFF_8079;
 			modinfo->eeprom_len = RTE_ETH_MODULE_SFF_8079_LEN;
 		} else if (sff8472_comp == 0x00) {
@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
 		modinfo->eeprom_len = I40E_MODULE_QSFP_MAX_LEN;
 		break;
 	default:
-		PMD_DRV_LOG(ERR, "Module type unrecognized\n");
+		PMD_DRV_LOG(ERR, "Module type unrecognized");
 		return -EINVAL;
 	}
 	return 0;
@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
 			}
 		}
 		name[strlen(name) - 1] = '\0';
-		PMD_DRV_LOG(INFO, "name = %s\n", name);
+		PMD_DRV_LOG(INFO, "name = %s", name);
 		if (!strcmp(name, "GTPC"))
 			new_pctype =
 				i40e_find_customized_pctype(pf,
@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
 					continue;
 				memset(name, 0, sizeof(name));
 				strcpy(name, proto[n].name);
-				PMD_DRV_LOG(INFO, "name = %s\n", name);
+				PMD_DRV_LOG(INFO, "name = %s", name);
 				if (!strncasecmp(name, "PPPOE", 5))
 					ptype_mapping[i].sw_ptype |=
 						RTE_PTYPE_L2_ETHER_PPPOE;
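
As background for the i40e hunks above: the driver's log helpers append
the terminating newline themselves, so a "\n" inside the format string
comes out as a blank line. A minimal sketch of the wrapper, close to what
drivers/net/i40e/i40e_logs.h defines (the body below is paraphrased, not
copied verbatim):

    /* The macro appends "\n" exactly once, so format strings must not. */
    #define PMD_DRV_LOG_RAW(level, fmt, args...) \
        rte_log(RTE_LOG_ ## level, i40e_logtype_driver, "%s(): " fmt, \
                __func__, ## args)

    #define PMD_DRV_LOG(level, fmt, args...) \
        PMD_DRV_LOG_RAW(level, fmt "\n", ## args)

With that expansion, PMD_DRV_LOG(ERR, "oops\n") emits "oops\n\n", which is
exactly the double newline this patch removes.
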
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 15d9ff868f..4a47a8f7ee 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1280,17 +1280,17 @@ i40e_pf_host_process_cmd_request_queues(struct i40e_pf_vf *vf, uint8_t *msg)
 		req_pairs = i40e_align_floor(req_pairs) << 1;

 	if (req_pairs == 0) {
-		PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.\n",
+		PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.",
 			    vf->vf_idx);
 	} else if (req_pairs > I40E_MAX_QP_NUM_PER_VF) {
 		PMD_DRV_LOG(ERR,
-			    "VF %d tried to request more than %d queues.\n",
+			    "VF %d tried to request more than %d queues.",
 			    vf->vf_idx,
 			    I40E_MAX_QP_NUM_PER_VF);
 		vfres->num_queue_pairs = I40E_MAX_QP_NUM_PER_VF;
 	} else if (req_pairs > cur_pairs + pf->qp_pool.num_free) {
 		PMD_DRV_LOG(ERR, "VF %d requested %d queues (rounded to %d) "
-			"but only %d available\n",
+			"but only %d available",
 			vf->vf_idx,
 			vfres->num_queue_pairs,
 			req_pairs,
@@ -1550,7 +1550,7 @@ check:
 	if (first_cycle && cur_cycle < first_cycle +
 			(uint64_t)pf->vf_msg_cfg.period * rte_get_timer_hz()) {
 		PMD_DRV_LOG(WARNING, "VF %u too much messages(%u in %u"
-				" seconds),\n\tany new message from which"
+				" seconds), any new message from which"
 				" will be ignored during next %u seconds!",
 				vf_id, pf->vf_msg_cfg.max_msg,
 				(uint32_t)((cur_cycle - first_cycle +
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5e693cb1ea..e65e8829d9 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1229,11 +1229,11 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			ctx_txd->type_cmd_tso_mss =
 				rte_cpu_to_le_64(cd_type_cmd_tso_mss);

-			PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n"
-				"tunneling_params: %#x;\n"
-				"l2tag2: %#hx;\n"
-				"rsvd: %#hx;\n"
-				"type_cmd_tso_mss: %#"PRIx64";\n",
+			PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]: "
+				"tunneling_params: %#x; "
+				"l2tag2: %#hx; "
+				"rsvd: %#hx; "
+				"type_cmd_tso_mss: %#"PRIx64";",
 				tx_pkt, tx_id,
 				ctx_txd->tunneling_params,
 				ctx_txd->l2tag2,
@@ -1276,12 +1276,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 				txd = &txr[tx_id];
 				txn = &sw_ring[txe->next_id];
 			}
-			PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
-				"buf_dma_addr: %#"PRIx64";\n"
-				"td_cmd: %#x;\n"
-				"td_offset: %#x;\n"
-				"td_len: %u;\n"
-				"td_tag: %#x;\n",
+			PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]: "
+				"buf_dma_addr: %#"PRIx64"; "
+				"td_cmd: %#x; "
+				"td_offset: %#x; "
+				"td_len: %u; "
+				"td_tag: %#x;",
 				tx_pkt, tx_id, buf_dma_addr,
 				td_cmd, td_offset, slen, td_tag);

@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 				txq->queue_id);
 	else
 		PMD_INIT_LOG(DEBUG,
-				"Neither simple nor vector Tx enabled on Tx queue %u\n",
+				"Neither simple nor vector Tx enabled on Tx queue %u",
 				txq->queue_id);
 }

diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 54bff05675..9087909ec2 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)

 	kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
 	if (!kvlist) {
-		PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+		PMD_INIT_LOG(ERR, "invalid kvargs key");
 		return -EINVAL;
 	}

@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
 	if (ad->devargs.quanta_size != 0 &&
 	    (ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
 	     ad->devargs.quanta_size & 0x40)) {
-		PMD_INIT_LOG(ERR, "invalid quanta size\n");
+		PMD_INIT_LOG(ERR, "invalid quanta size");
 		ret = -EINVAL;
 		goto bail;
 	}
@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
 	 */
 	ret = iavf_check_vf_reset_done(hw);
 	if (ret) {
-		PMD_DRV_LOG(ERR, "Wait too long for reset done!\n");
+		PMD_DRV_LOG(ERR, "Wait too long for reset done!");
 		return ret;
 	}
 	iavf_set_no_poll(adapter, false);

-	PMD_DRV_LOG(DEBUG, "Start dev_reset ...\n");
+	PMD_DRV_LOG(DEBUG, "Start dev_reset ...");
 	ret = iavf_dev_uninit(dev);
 	if (ret)
 		return ret;
@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
 		return;

 	if (!iavf_is_reset_detected(adapter)) {
-		PMD_DRV_LOG(DEBUG, "reset not start\n");
+		PMD_DRV_LOG(DEBUG, "reset not start");
 		return;
 	}

@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
 	goto exit;

 error:
-	PMD_DRV_LOG(DEBUG, "RESET recover with error code=%d\n", ret);
+	PMD_DRV_LOG(DEBUG, "RESET recover with error code=%dn", ret);
 exit:
 	vf->in_reset_recovery = false;
 	iavf_set_no_poll(adapter, false);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f19aa14646..ec0dffa30e 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
 	up = m->vlan_tci >> IAVF_VLAN_TAG_PCP_OFFSET;

 	if (!(vf->qos_cap->cap[txq->tc].tc_prio & BIT(up))) {
-		PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u\n",
+		PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u",
 			up, txq->queue_id);
 		return -1;
 	} else {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5d845bba31..a025b0ea7f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1646,7 +1646,7 @@ ice_dcf_init_repr_info(struct ice_dcf_adapter *dcf_adapter)
 				   dcf_adapter->real_hw.num_vfs,
 				   sizeof(dcf_adapter->repr_infos[0]), 0);
 	if (!dcf_adapter->repr_infos) {
-		PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors\n");
+		PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors");
 		return -ENOMEM;
 	}

@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
 		}

 		if (dcf_adapter->real_hw.vf_vsi_map[vf_id] == dcf_vsi_id) {
-			PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.\n", vf_id);
+			PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.", vf_id);
 			ret = -EINVAL;
 			break;
 		}
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index af281f069a..564ff02fd8 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -133,7 +133,7 @@ ice_dcf_vf_repr_hw(struct ice_dcf_vf_repr *repr)
 	struct ice_dcf_adapter *dcf_adapter;

 	if (!repr->dcf_valid) {
-		PMD_DRV_LOG(ERR, "DCF for VF representor has been released\n");
+		PMD_DRV_LOG(ERR, "DCF for VF representor has been released");
 		return NULL;
 	}

@@ -272,7 +272,7 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)

 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
-				    "Disable the port VLAN firstly\n");
+				    "Disable the port VLAN firstly");
 			return -EINVAL;
 		}

@@ -318,7 +318,7 @@ ice_dcf_vf_repr_vlan_pvid_set(struct rte_eth_dev *dev,

 	if (repr->outer_vlan_info.stripping_ena) {
 		PMD_DRV_LOG(ERR,
-			    "Disable the VLAN stripping firstly\n");
+			    "Disable the VLAN stripping firstly");
 		return -EINVAL;
 	}

@@ -367,7 +367,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,

 	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
-			    "Can accelerate only outer VLAN in QinQ\n");
+			    "Can accelerate only outer VLAN in QinQ");
 		return -EINVAL;
 	}

@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	    tpid != RTE_ETHER_TYPE_VLAN &&
 	    tpid != RTE_ETHER_TYPE_QINQ1) {
 		PMD_DRV_LOG(ERR,
-			    "Invalid TPID: 0x%04x\n", tpid);
+			    "Invalid TPID: 0x%04x", tpid);
 		return -EINVAL;
 	}

@@ -387,7 +387,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 						    true);
 		if (err) {
 			PMD_DRV_LOG(ERR,
-				    "Failed to reset port VLAN : %d\n",
+				    "Failed to reset port VLAN : %d",
 				    err);
 			return err;
 		}
@@ -398,7 +398,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
-				    "Failed to reset VLAN stripping : %d\n",
+				    "Failed to reset VLAN stripping : %d",
 				    err);
 			return err;
 		}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c1d2b91ad7..86f43050a5 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1867,7 +1867,7 @@ no_dsn:

 	strncpy(pkg_file, ICE_PKG_FILE_DEFAULT, ICE_MAX_PKG_FILENAME_SIZE);
 	if (rte_firmware_read(pkg_file, &buf, &bufsz) < 0) {
-		PMD_INIT_LOG(ERR, "failed to search file path\n");
+		PMD_INIT_LOG(ERR, "failed to search file path");
 		return -1;
 	}

@@ -1876,7 +1876,7 @@ load_fw:

 	err = ice_copy_and_init_pkg(hw, buf, bufsz);
 	if (!ice_is_init_pkg_successful(err)) {
-		PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d\n", err);
+		PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d", err);
 		free(buf);
 		return -1;
 	}
@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)

 	kvlist = rte_kvargs_parse(devargs->args, ice_valid_args);
 	if (kvlist == NULL) {
-		PMD_INIT_LOG(ERR, "Invalid kvargs key\n");
+		PMD_INIT_LOG(ERR, "Invalid kvargs key");
 		return -EINVAL;
 	}

@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
 	if (pos) {
 		if (rte_pci_read_config(pci_dev, &dsn_low, 4, pos + 4) < 0 ||
 				rte_pci_read_config(pci_dev, &dsn_high, 4, pos + 8) < 0) {
-			PMD_INIT_LOG(ERR, "Failed to read pci config space\n");
+			PMD_INIT_LOG(ERR, "Failed to read pci config space");
 		} else {
 			use_dsn = true;
 			dsn = (uint64_t)dsn_high << 32 | dsn_low;
 		}
 	} else {
-		PMD_INIT_LOG(ERR, "Failed to read device serial number\n");
+		PMD_INIT_LOG(ERR, "Failed to read device serial number");
 	}

 	ret = ice_load_pkg(pf->adapter, use_dsn, dsn);
 	if (ret == 0) {
 		ret = ice_init_hw_tbls(hw);
 		if (ret) {
-			PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d\n", ret);
+			PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d", ret);
 			rte_free(hw->pkg_copy);
 		}
 	}
@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)

 	ret = ice_aq_stop_lldp(hw, true, false, NULL);
 	if (ret != ICE_SUCCESS)
-		PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
+		PMD_INIT_LOG(DEBUG, "lldp has already stopped");
 	ret = ice_init_dcb(hw, true);
 	if (ret != ICE_SUCCESS)
-		PMD_INIT_LOG(DEBUG, "Failed to init DCB\n");
+		PMD_INIT_LOG(DEBUG, "Failed to init DCB");
 	/* Forward LLDP packets to default VSI */
 	ret = ice_vsi_config_sw_lldp(vsi, true);
 	if (ret != ICE_SUCCESS)
-		PMD_INIT_LOG(DEBUG, "Failed to cfg lldp\n");
+		PMD_INIT_LOG(DEBUG, "Failed to cfg lldp");
 	/* register callback func to eal lib */
 	rte_intr_callback_register(intr_handle,
 				   ice_interrupt_handler, dev);
@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
 	if (hw->phy_cfg == ICE_PHY_E822) {
 		ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
 		if (ret)
-			PMD_INIT_LOG(ERR, "Failed to start phy timer\n");
+			PMD_INIT_LOG(ERR, "Failed to start phy timer");
 	}

 	if (!ad->is_safe_mode) {
@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
 	status = ice_rem_rss_cfg(hw, vsi->idx, cfg);
 	if (status && status != ICE_ERR_DOES_NOT_EXIST) {
 		PMD_DRV_LOG(ERR,
-			    "ice_rem_rss_cfg failed for VSI:%d, error:%d\n",
+			    "ice_rem_rss_cfg failed for VSI:%d, error:%d",
 			    vsi->idx, status);
 		return -EBUSY;
 	}
@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
 	status = ice_add_rss_cfg(hw, vsi->idx, cfg);
 	if (status) {
 		PMD_DRV_LOG(ERR,
-			    "ice_add_rss_cfg failed for VSI:%d, error:%d\n",
+			    "ice_add_rss_cfg failed for VSI:%d, error:%d",
 			    vsi->idx, status);
 		return -EBUSY;
 	}
@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,

 	ret = ice_rem_rss_cfg(hw, vsi_id, cfg);
 	if (ret && ret != ICE_ERR_DOES_NOT_EXIST)
-		PMD_DRV_LOG(ERR, "remove rss cfg failed\n");
+		PMD_DRV_LOG(ERR, "remove rss cfg failed");

 	ice_rem_rss_cfg_post(pf, cfg->addl_hdrs);

@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,

 	ret = ice_add_rss_cfg_pre(pf, cfg->addl_hdrs);
 	if (ret)
-		PMD_DRV_LOG(ERR, "add rss cfg pre failed\n");
+		PMD_DRV_LOG(ERR, "add rss cfg pre failed");

 	ret = ice_add_rss_cfg(hw, vsi_id, cfg);
 	if (ret)
-		PMD_DRV_LOG(ERR, "add rss cfg failed\n");
+		PMD_DRV_LOG(ERR, "add rss cfg failed");

 	ret = ice_add_rss_cfg_post(pf, cfg);
 	if (ret)
-		PMD_DRV_LOG(ERR, "add rss cfg post failed\n");
+		PMD_DRV_LOG(ERR, "add rss cfg post failed");

 	return 0;
 }
@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
 	if (rss_key_size > sizeof(default_key)) {
 		PMD_DRV_LOG(WARNING,
 			    "requested size %u is larger than default %zu, "
-			    "only %zu bytes are gotten for key\n",
+			    "only %zu bytes are gotten for key",
 			    rss_key_size, sizeof(default_key),
 			    sizeof(default_key));
 	}
@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)

 	if (nb_q == 0) {
 		PMD_DRV_LOG(WARNING,
-			"RSS is not supported as rx queues number is zero\n");
+			"RSS is not supported as rx queues number is zero");
 		return 0;
 	}

 	if (is_safe_mode) {
-		PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode\n");
+		PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode");
 		return 0;
 	}

@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
 		cfg.phy_type_low = phy_type_low & phy_caps->phy_type_low;
 		cfg.phy_type_high = phy_type_high & phy_caps->phy_type_high;
 	} else {
-		PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+		PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
 		cfg.phy_type_low = phy_caps->phy_type_low;
 		cfg.phy_type_high = phy_caps->phy_type_high;
 	}
@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
 		}
 		break;
 	default:
-		PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.\n");
+		PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.");
 		return -EINVAL;
 	}
 	return 0;
@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
 							   0, NULL);
 				PMD_DRV_LOG(DEBUG, "SFF %02X %02X %02X %X = "
 					"%02X%02X%02X%02X."
-					"%02X%02X%02X%02X (%X)\n",
+					"%02X%02X%02X%02X (%X)",
 					addr, offset, page, is_sfp,
 					value[0], value[1],
 					value[2], value[3],
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 0b7920ad44..dd9130ace3 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -334,7 +334,7 @@ ice_fdir_counter_alloc(struct ice_pf *pf, uint32_t shared, uint32_t id)
 	}

 	if (!counter_free) {
-		PMD_DRV_LOG(ERR, "No free counter found\n");
+		PMD_DRV_LOG(ERR, "No free counter found");
 		return NULL;
 	}

diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index d8c46347d2..dad117679d 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
 					   ice_get_hw_vsi_num(hw, vsi_handle),
 					   id);
 		if (ret) {
-			PMD_DRV_LOG(ERR, "remove RSS flow failed\n");
+			PMD_DRV_LOG(ERR, "remove RSS flow failed");
 			return ret;
 		}

 		ret = ice_rem_prof(hw, ICE_BLK_RSS, id);
 		if (ret) {
-			PMD_DRV_LOG(ERR, "remove RSS profile failed\n");
+			PMD_DRV_LOG(ERR, "remove RSS profile failed");
 			return ret;
 		}
 	}
@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
 	/* add new profile */
 	ret = ice_flow_set_hw_prof(hw, vsi_handle, 0, prof, ICE_BLK_RSS);
 	if (ret) {
-		PMD_DRV_LOG(ERR, "HW profile add failed\n");
+		PMD_DRV_LOG(ERR, "HW profile add failed");
 		return ret;
 	}

@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
 	return 0;

 err:
-	PMD_DRV_LOG(ERR, "HW profile remove failed\n");
+	PMD_DRV_LOG(ERR, "HW profile remove failed");
 	return ret;
 }

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index dea6a5b535..7da314217a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
 	if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
 	    rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
 		PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
-			   "(port=%d queue=%d) value=0x%"PRIx64"\n",
+			   "(port=%d queue=%d) value=0x%"PRIx64,
 			   desc_to_clean_to,
 			   txq->port_id, txq->queue_id,
 			   txd[desc_to_clean_to].cmd_type_offset_bsz);
diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c
index 2c15611a23..baae80d661 100644
--- a/drivers/net/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c
@@ -203,7 +203,7 @@ ipn3ke_vbng_init_done(struct ipn3ke_hw *hw)
 	}

 	if (!timeout) {
-		IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.\n");
+		IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.");
 		return -1;
 	}

@@ -348,7 +348,7 @@ ipn3ke_hw_init(struct rte_afu_device *afu_dev,
 		hw->acc_tm = 1;
 		hw->acc_flow = 1;

-		IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x\n",
+		IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x",
 			IPN3KE_READ_REG(hw, 0));
 	}

diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index d20a29b9a2..a2f76268b5 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -993,7 +993,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
 	uint32_t time_out = MHL_COMMAND_TIME_COUNT;
 	uint32_t i;

-	IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start\n");
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start");

 	pdata = (uint32_t *)flow->rule.key;
 	IPN3KE_AFU_PMD_DEBUG(" - key   :");
@@ -1003,7 +1003,6 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,

 	for (i = 0; i < 4; i++)
 		IPN3KE_AFU_PMD_DEBUG(" %02x", ipn3ke_swap32(pdata[3 - i]));
-	IPN3KE_AFU_PMD_DEBUG("\n");

 	pdata = (uint32_t *)flow->rule.result;
 	IPN3KE_AFU_PMD_DEBUG(" - result:");
@@ -1013,7 +1012,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,

 	for (i = 0; i < 1; i++)
 		IPN3KE_AFU_PMD_DEBUG(" %02x", pdata[i]);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end\n");
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end");

 	pdata = (uint32_t *)flow->rule.key;

@@ -1254,7 +1253,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_RX_TEST,
 				0,
 				0x1);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x", data);

 	/* configure base mac address */
 	IPN3KE_MASK_WRITE_REG(hw,
@@ -1268,7 +1267,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_BASE_DST_MAC_ADDR_HI,
 				0,
 				0xFFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x", data);

 	IPN3KE_MASK_WRITE_REG(hw,
 			IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
@@ -1281,7 +1280,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
 				0,
 				0xFFFFFFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x", data);


 	/* configure hash lookup rules enable */
@@ -1296,7 +1295,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_LKUP_ENABLE,
 				0,
 				0xFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x", data);


 	/* configure rx parse config, settings associated with VxLAN */
@@ -1311,7 +1310,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_RX_PARSE_CFG,
 				0,
 				0x3FFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x", data);


 	/* configure QinQ S-Tag */
@@ -1326,7 +1325,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_QINQ_STAG,
 				0,
 				0xFFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x", data);


 	/* configure gen ctrl */
@@ -1341,7 +1340,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_MHL_GEN_CTRL,
 				0,
 				0x1F);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x", data);


 	/* clear monitoring register */
@@ -1356,7 +1355,7 @@ int ipn3ke_flow_init(void *dev)
 				IPN3KE_CLF_MHL_MON_0,
 				0,
 				0xFFFFFFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x\n", data);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x", data);


 	ipn3ke_flow_hw_flush(hw);
@@ -1366,7 +1365,7 @@ int ipn3ke_flow_init(void *dev)
 						IPN3KE_CLF_EM_NUM,
 						0,
 						0xFFFFFFFF);
-	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x\n", hw->flow_max_entries);
+	IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x", hw->flow_max_entries);
 	hw->flow_num_entries = 0;

 	return 0;
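
A behavioural note on the ipn3ke hunks: the deleted bare
IPN3KE_AFU_PMD_DEBUG("\n") calls terminated hex dumps that were assembled
one fragment per call inside a loop. Now that every call ends its own
line, each fragment prints on a separate line. If the one-line dump is
still wanted, the usual fix is to build the string first and log it once;
a minimal sketch (the 128-byte buffer size is an assumption, not part of
this patch):

    char line[128];    /* assumed large enough for the whole dump */
    size_t off = 0;
    uint32_t i;

    /* Collect all fragments, then emit a single self-terminating log. */
    off += snprintf(line + off, sizeof(line) - off, " - key   :");
    for (i = 0; i < 4 && off < sizeof(line); i++)
        off += snprintf(line + off, sizeof(line) - off, " %02x",
                        ipn3ke_swap32(pdata[3 - i]));
    IPN3KE_AFU_PMD_DEBUG("%s", line);

The same consideration applies to the per-node lists printed by
ipn3ke_tm_show() and ipn3ke_tm_show_commmit() below.
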
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 8145f1bb2a..feb57420c3 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2401,8 +2401,8 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 	else
 		link->link_status = 0;

-	IPN3KE_AFU_PMD_DEBUG("port is %d\n", port);
-	IPN3KE_AFU_PMD_DEBUG("link->link_status is %d\n", link->link_status);
+	IPN3KE_AFU_PMD_DEBUG("port is %d", port);
+	IPN3KE_AFU_PMD_DEBUG("link->link_status is %d", link->link_status);

 	rawdev->dev_ops->attr_get(rawdev,
 				"LineSideLinkSpeed",
@@ -2479,14 +2479,14 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,

 	if (!rpst->ori_linfo.link_status &&
 		link.link_status) {
-		IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up\n", rpst->port_id);
+		IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up", rpst->port_id);
 		rpst->ori_linfo.link_status = link.link_status;
 		rpst->ori_linfo.link_speed = link.link_speed;

 		rte_eth_linkstatus_set(ethdev, &link);

 		if (rpst->i40e_pf_eth) {
-			IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up\n",
+			IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up",
 				rpst->i40e_pf_eth_port_id);
 			rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
 			pf = rpst->i40e_pf_eth;
@@ -2494,7 +2494,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 		}
 	} else if (rpst->ori_linfo.link_status &&
 				!link.link_status) {
-		IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down\n",
+		IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down",
 			rpst->port_id);
 		rpst->ori_linfo.link_status = link.link_status;
 		rpst->ori_linfo.link_speed = link.link_speed;
@@ -2502,7 +2502,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 		rte_eth_linkstatus_set(ethdev, &link);

 		if (rpst->i40e_pf_eth) {
-			IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down\n",
+			IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down",
 				rpst->i40e_pf_eth_port_id);
 			rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
 			pf = rpst->i40e_pf_eth;
@@ -2537,14 +2537,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)

 	if (!rpst->ori_linfo.link_status &&
 				link.link_status) {
-		IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up\n", rpst->port_id);
+		IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up", rpst->port_id);
 		rpst->ori_linfo.link_status = link.link_status;
 		rpst->ori_linfo.link_speed = link.link_speed;

 		rte_eth_linkstatus_set(rpst->ethdev, &link);

 		if (rpst->i40e_pf_eth) {
-			IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up\n",
+			IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up",
 				rpst->i40e_pf_eth_port_id);
 			rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
 			pf = rpst->i40e_pf_eth;
@@ -2552,14 +2552,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 		}
 	} else if (rpst->ori_linfo.link_status &&
 		!link.link_status) {
-		IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down\n", rpst->port_id);
+		IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down", rpst->port_id);
 		rpst->ori_linfo.link_status = link.link_status;
 		rpst->ori_linfo.link_speed = link.link_speed;

 		rte_eth_linkstatus_set(rpst->ethdev, &link);

 		if (rpst->i40e_pf_eth) {
-			IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down\n",
+			IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down",
 				rpst->i40e_pf_eth_port_id);
 			rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
 			pf = rpst->i40e_pf_eth;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 0260227900..44a8b88699 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -1934,10 +1934,10 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)

 	tm_id = tm->tm_id;

-	IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***\n", tm_id);
+	IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***", tm_id);

 	port_n = tm->h.port_node;
-	IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)\n", port_n->node_index,
+	IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)", port_n->node_index,
 				str_state[port_n->node_state]);

 	vt_nl = &tm->h.port_node->children_node_list;
@@ -1951,7 +1951,6 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
 					cos_n->node_index,
 					str_state[cos_n->node_state]);
 		}
-		IPN3KE_AFU_PMD_DEBUG("\n");
 	}
 }

@@ -1969,14 +1968,13 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)

 	tm_id = tm->tm_id;

-	IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***\n", tm_id);
+	IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***", tm_id);
 	n = tm->h.port_commit_node;
 	IPN3KE_AFU_PMD_DEBUG("Port: ");
 	if (n)
 		IPN3KE_AFU_PMD_DEBUG("(%d|%s)",
 			n->node_index,
 			str_state[n->node_state]);
-	IPN3KE_AFU_PMD_DEBUG("\n");

 	nl = &tm->h.vt_commit_node_list;
 	IPN3KE_AFU_PMD_DEBUG("VT  : ");
@@ -1985,7 +1983,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
 				n->node_index,
 				str_state[n->node_state]);
 	}
-	IPN3KE_AFU_PMD_DEBUG("\n");

 	nl = &tm->h.cos_commit_node_list;
 	IPN3KE_AFU_PMD_DEBUG("COS : ");
@@ -1994,7 +1991,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
 				n->node_index,
 				str_state[n->node_state]);
 	}
-	IPN3KE_AFU_PMD_DEBUG("\n");
 }

 /* Traffic manager hierarchy commit */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a44497ce51..3ac65ca3b3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 	}

 	if (hw->mac.ops.fw_recovery_mode && hw->mac.ops.fw_recovery_mode(hw)) {
-		PMD_INIT_LOG(ERR, "\nERROR: "
-			"Firmware recovery mode detected. Limiting functionality.\n"
-			"Refer to the Intel(R) Ethernet Adapters and Devices "
-			"User Guide for details on firmware recovery mode.");
+		PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
 		return -EIO;
 	}

@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,

 	if (eth_da.nb_representor_ports > 0 &&
 	    eth_da.type != RTE_ETH_REPRESENTOR_VF) {
-		PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+		PMD_DRV_LOG(ERR, "unsupported representor type: %s",
 			    pci_dev->device.devargs->args);
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index d331308556..3a666ba15f 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -120,7 +120,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
 		/* Fail if no match and no free entries*/
 		if (ip_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Rx IP table\n");
+				    "No free entry left in the Rx IP table");
 			return -1;
 		}

@@ -134,7 +134,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
 		/* Fail if no free entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Rx SA table\n");
+				    "No free entry left in the Rx SA table");
 			return -1;
 		}

@@ -232,7 +232,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
 		/* Fail if no free entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Tx SA table\n");
+				    "No free entry left in the Tx SA table");
 			return -1;
 		}

@@ -291,7 +291,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match*/
 		if (ip_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Rx IP table\n");
+				    "Entry not found in the Rx IP table");
 			return -1;
 		}

@@ -306,7 +306,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Rx SA table\n");
+				    "Entry not found in the Rx SA table");
 			return -1;
 		}

@@ -349,7 +349,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Tx SA table\n");
+				    "Entry not found in the Tx SA table");
 			return -1;
 		}
 		reg_val = IPSRXIDX_WRITE | (sa_index << 3);
@@ -379,7 +379,7 @@ ixgbe_crypto_create_session(void *device,
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
-		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -388,14 +388,14 @@ ixgbe_crypto_create_session(void *device,
 		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
-			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+			PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
 			return -ENOTSUP;
 		}
 	} else {
 		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
-			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+			PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
 			return -ENOTSUP;
 		}
 	}
@@ -409,7 +409,7 @@ ixgbe_crypto_create_session(void *device,

 	if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (ixgbe_crypto_add_sa(ic_session)) {
-			PMD_DRV_LOG(ERR, "Failed to add SA\n");
+			PMD_DRV_LOG(ERR, "Failed to add SA");
 			return -EPERM;
 		}
 	}
@@ -431,12 +431,12 @@ ixgbe_crypto_remove_session(void *device,
 	struct ixgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);

 	if (eth_dev != ic_session->dev) {
-		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+		PMD_DRV_LOG(ERR, "Session not bound to this device");
 		return -ENODEV;
 	}

 	if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
-		PMD_DRV_LOG(ERR, "Failed to remove session\n");
+		PMD_DRV_LOG(ERR, "Failed to remove session");
 		return -EFAULT;
 	}

diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 0a0f639e39..002bc71c2a 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 	struct ixgbe_ethertype_filter ethertype_filter;

 	if (!hw->mac.ops.set_ethertype_anti_spoofing) {
-		PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+		PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
 		return;
 	}

 	i = ixgbe_ethertype_filter_lookup(filter_info,
 					  IXGBE_ETHERTYPE_FLOW_CTRL);
 	if (i >= 0) {
-		PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+		PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
 		return;
 	}

@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 	i = ixgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-		PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+		PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
 		return;
 	}

@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)

 	vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));

-	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);

 	vmolr &= ~IXGBE_VMOLR_MPE;

@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		break;
 	}

-	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
 		api_version, vf);

 	return -1;
@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	case RTE_ETH_MQ_TX_NONE:
 	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
-			", but its tx mode = %d\n", vf,
+			", but its tx mode = %d", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;

@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		break;

 	default:
-		PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+		PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
 			eth_conf->txmode.mq_mode);
 		return -1;
 	}
@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		if (!(fctrl & IXGBE_FCTRL_UPE)) {
 			/* VF promisc requires PF in promisc */
 			PMD_DRV_LOG(ERR,
-			       "Enabling VF promisc requires PF in promisc\n");
+			       "Enabling VF promisc requires PF in promisc");
 			return -1;
 		}

@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	if (index) {
 		if (!rte_is_valid_assigned_ether_addr(
 			(struct rte_ether_addr *)new_mac)) {
-			PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+			PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
 			return -1;
 		}

diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index f76ef63921..15c28e7a3f 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -955,7 +955,7 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
 	while (--retries) {
 		status = ixgbe_acquire_swfw_semaphore(hw, mask);
 		if (status) {
-			PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d\n",
+			PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d",
 				    status);
 			return status;
 		}
@@ -964,18 +964,18 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
 			return IXGBE_SUCCESS;

 		if (status == IXGBE_ERR_TOKEN_RETRY)
-			PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d\n",
+			PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d",
 				    status);

 		ixgbe_release_swfw_semaphore(hw, mask);
 		if (status != IXGBE_ERR_TOKEN_RETRY) {
 			PMD_DRV_LOG(ERR,
-				    "Retry get PHY token failed, Status=%d\n",
+				    "Retry get PHY token failed, Status=%d",
 				    status);
 			return status;
 		}
 	}
-	PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X\n",
+	PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X",
 		    hw->phy.id);
 	return status;
 }
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 18377d9caf..f05f4c24df 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -1292,7 +1292,7 @@ memif_connect(struct rte_eth_dev *dev)
 						PROT_READ | PROT_WRITE,
 						MAP_SHARED, mr->fd, 0);
 				if (mr->addr == MAP_FAILED) {
-					MIF_LOG(ERR, "mmap failed: %s\n",
+					MIF_LOG(ERR, "mmap failed: %s",
 						strerror(errno));
 					return -1;
 				}
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index a1a7e93288..7c0ac6888b 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -106,7 +106,7 @@ mlx4_init_shared_data(void)
 						 sizeof(*mlx4_shared_data),
 						 SOCKET_ID_ANY, 0);
 			if (mz == NULL) {
-				ERROR("Cannot allocate mlx4 shared data\n");
+				ERROR("Cannot allocate mlx4 shared data");
 				ret = -rte_errno;
 				goto error;
 			}
@@ -117,7 +117,7 @@ mlx4_init_shared_data(void)
 			/* Lookup allocated shared memory. */
 			mz = rte_memzone_lookup(MZ_MLX4_PMD_SHARED_DATA);
 			if (mz == NULL) {
-				ERROR("Cannot attach mlx4 shared data\n");
+				ERROR("Cannot attach mlx4 shared data");
 				ret = -rte_errno;
 				goto error;
 			}
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9bf1ec5509..297ff3fb31 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -257,7 +257,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	if (tx_free_thresh + 3 >= nb_desc) {
 		PMD_INIT_LOG(ERR,
 			     "tx_free_thresh must be less than the number of TX entries minus 3(%u)."
-			     " (tx_free_thresh=%u port=%u queue=%u)\n",
+			     " (tx_free_thresh=%u port=%u queue=%u)",
 			     nb_desc - 3,
 			     tx_free_thresh, dev->data->port_id, queue_idx);
 		return -EINVAL;
@@ -902,7 +902,7 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,

 		if (!rxq->rxbuf_info) {
 			PMD_DRV_LOG(ERR,
-				"Could not allocate rxbuf info for queue %d\n",
+				"Could not allocate rxbuf info for queue %d",
 				queue_id);
 			rte_free(rxq->event_buf);
 			rte_free(rxq);
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 4dced0d328..68b0a8b8ab 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
 	u32 i;

 	if (rte_pci_set_bus_master(pci_dev, enable) < 0) {
-		DEBUGOUT("Cannot configure PCI bus master\n");
+		DEBUGOUT("Cannot configure PCI bus master");
 		return -1;
 	}

diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fb86e7b10d..4321924cb9 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
 		if (ssid == 0x1) {
 			PMD_INIT_LOG(ERR,
-				"Read of internal subsystem device id failed\n");
+				"Read of internal subsystem device id failed");
 			return -ENODEV;
 		}
 		hw->sub_system_id = (u16)ssid >> 8 | (u16)ssid << 8;
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
index 947ae7fe94..bb62e2fbb7 100644
--- a/drivers/net/ngbe/ngbe_pf.c
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -71,7 +71,7 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
 			sizeof(struct ngbe_vf_info) * vf_num, 0);
 	if (*vfinfo == NULL) {
 		PMD_INIT_LOG(ERR,
-			"Cannot allocate memory for private VF data\n");
+			"Cannot allocate memory for private VF data");
 		return -ENOMEM;
 	}

@@ -320,7 +320,7 @@ ngbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)

 	vmolr = rd32(hw, NGBE_POOLETHCTL(vf));

-	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);

 	vmolr &= ~NGBE_POOLETHCTL_MCP;

@@ -482,7 +482,7 @@ ngbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
 		break;
 	}

-	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
 		api_version, vf);

 	return -1;
@@ -564,7 +564,7 @@ ngbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
 		if (!(fctrl & NGBE_PSRCTL_UCP)) {
 			/* VF promisc requires PF in promisc */
 			PMD_DRV_LOG(ERR,
-			       "Enabling VF promisc requires PF in promisc\n");
+			       "Enabling VF promisc requires PF in promisc");
 			return -1;
 		}

@@ -601,7 +601,7 @@ ngbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)

 	if (index) {
 		if (!rte_is_valid_assigned_ether_addr(ea)) {
-			PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+			PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
 			return -1;
 		}

diff --git a/drivers/net/octeon_ep/cnxk_ep_tx.c b/drivers/net/octeon_ep/cnxk_ep_tx.c
index 9f11a2f317..8628edf8a7 100644
--- a/drivers/net/octeon_ep/cnxk_ep_tx.c
+++ b/drivers/net/octeon_ep/cnxk_ep_tx.c
@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
 		num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;

 		if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
-			otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+			otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
 			goto exit;
 		}

diff --git a/drivers/net/octeon_ep/cnxk_ep_vf.c b/drivers/net/octeon_ep/cnxk_ep_vf.c
index ef275703c3..74b63a161f 100644
--- a/drivers/net/octeon_ep/cnxk_ep_vf.c
+++ b/drivers/net/octeon_ep/cnxk_ep_vf.c
@@ -102,7 +102,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("IDLE bit is not set\n");
+		otx_ep_err("IDLE bit is not set");
 		return -EIO;
 	}

@@ -134,7 +134,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
 	} while (reg_val != 0 && loop--);

 	if (loop < 0) {
-		otx_ep_err("INST CNT REGISTER is not zero\n");
+		otx_ep_err("INST CNT REGISTER is not zero");
 		return -EIO;
 	}

@@ -181,7 +181,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("OUT CNT REGISTER value is zero\n");
+		otx_ep_err("OUT CNT REGISTER value is zero");
 		return -EIO;
 	}

@@ -217,7 +217,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("Packets credit register value is not cleared\n");
+		otx_ep_err("Packets credit register value is not cleared");
 		return -EIO;
 	}

@@ -250,7 +250,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("Packets sent register value is not cleared\n");
+		otx_ep_err("Packets sent register value is not cleared");
 		return -EIO;
 	}

@@ -280,7 +280,7 @@ cnxk_ep_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("INSTR DBELL not coming back to 0\n");
+		otx_ep_err("INSTR DBELL not coming back to 0");
 		return -EIO;
 	}

diff --git a/drivers/net/octeon_ep/otx2_ep_vf.c b/drivers/net/octeon_ep/otx2_ep_vf.c
index 7f4edf8dcf..fdab542246 100644
--- a/drivers/net/octeon_ep/otx2_ep_vf.c
+++ b/drivers/net/octeon_ep/otx2_ep_vf.c
@@ -37,7 +37,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
 				  SDP_VF_R_IN_INSTR_DBELL(q_no));
 	}
 	if (loop < 0) {
-		otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+		otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
 		return -EIO;
 	}

@@ -48,7 +48,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
 		rte_delay_ms(1);
 	} while ((d64 & ~SDP_VF_R_IN_CNTS_OUT_INT) != 0 && loop--);
 	if (loop < 0) {
-		otx_ep_err("%s: in_cnts init retry limit exceeded.\n", __func__);
+		otx_ep_err("%s: in_cnts init retry limit exceeded.", __func__);
 		return -EIO;
 	}

@@ -81,7 +81,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
 				  SDP_VF_R_OUT_SLIST_DBELL(q_no));
 	}
 	if (loop < 0) {
-		otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+		otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
 		return -EIO;
 	}

@@ -109,7 +109,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
 		rte_delay_ms(1);
 	} while ((d64 & ~SDP_VF_R_OUT_CNTS_IN_INT) != 0 && loop--);
 	if (loop < 0) {
-		otx_ep_err("%s: out_cnts init retry limit exceeded.\n", __func__);
+		otx_ep_err("%s: out_cnts init retry limit exceeded.", __func__);
 		return -EIO;
 	}

@@ -252,7 +252,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("IDLE bit is not set\n");
+		otx_ep_err("IDLE bit is not set");
 		return -EIO;
 	}

@@ -283,7 +283,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
 	} while (reg_val != 0 && loop--);

 	if (loop < 0) {
-		otx_ep_err("INST CNT REGISTER is not zero\n");
+		otx_ep_err("INST CNT REGISTER is not zero");
 		return -EIO;
 	}

@@ -332,7 +332,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("OUT CNT REGISTER value is zero\n");
+		otx_ep_err("OUT CNT REGISTER value is zero");
 		return -EIO;
 	}

@@ -368,7 +368,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("Packets credit register value is not cleared\n");
+		otx_ep_err("Packets credit register value is not cleared");
 		return -EIO;
 	}
 	otx_ep_dbg("SDP_R[%d]_credit:%x", oq_no, rte_read32(droq->pkts_credit_reg));
@@ -425,7 +425,7 @@ otx2_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("INSTR DBELL not coming back to 0\n");
+		otx_ep_err("INSTR DBELL not coming back to 0");
 		return -EIO;
 	}

diff --git a/drivers/net/octeon_ep/otx_ep_common.h b/drivers/net/octeon_ep/otx_ep_common.h
index 82e57520d3..938c51b35d 100644
--- a/drivers/net/octeon_ep/otx_ep_common.h
+++ b/drivers/net/octeon_ep/otx_ep_common.h
@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
 	{\
 	typeof(value) val = (value); \
 	typeof(reg_off) off = (reg_off); \
-	otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx\n", \
+	otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx", \
 		   (unsigned long)off, (unsigned long long)val); \
 	rte_write64(val, ((base_addr) + off)); \
 	}
diff --git a/drivers/net/octeon_ep/otx_ep_ethdev.c b/drivers/net/octeon_ep/otx_ep_ethdev.c
index 615cbbb648..c0298a56ac 100644
--- a/drivers/net/octeon_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeon_ep/otx_ep_ethdev.c
@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	ret = otx_ep_mbox_get_link_info(eth_dev, &link);
 	if (ret)
 		return -EINVAL;
-	otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d\n",
+	otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d",
 		    link.link_status, link.link_duplex, link.link_autoneg, link.link_speed);
 	return rte_eth_linkstatus_set(eth_dev, &link);
 }
@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
 	ret = otx_ep_mbox_set_mac_addr(eth_dev, mac_addr);
 	if (ret)
 		return -EINVAL;
-	otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+	otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "",
 		    RTE_ETHER_ADDR_BYTES(mac_addr));
 	rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
 	return 0;
@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
 	/* Enable IQ/OQ for this device */
 	ret = otx_epvf->fn_list.enable_io_queues(otx_epvf);
 	if (ret) {
-		otx_ep_err("IOQ enable failed\n");
+		otx_ep_err("IOQ enable failed");
 		return ret;
 	}

@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
 			    otx_epvf->droq[q]->pkts_credit_reg);

 		rte_wmb();
-		otx_ep_info("OQ[%d] dbells [%d]\n", q,
+		otx_ep_info("OQ[%d] dbells [%d]", q,
 		rte_read32(otx_epvf->droq[q]->pkts_credit_reg));
 	}

@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
 	otx_ep_set_tx_func(eth_dev);
 	otx_ep_set_rx_func(eth_dev);

-	otx_ep_info("dev started\n");
+	otx_ep_info("dev started");

 	for (q = 0; q < eth_dev->data->nb_rx_queues; q++)
 		eth_dev->data->rx_queue_state[q] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
 	/* Same DMA buffer is shared by OQ and IQ, clear it at start */
 	memset(otx_epvf->ism_buffer_mz->addr, 0, OTX_EP_ISM_BUFFER_SIZE);
 	if (otx_epvf->ism_buffer_mz == NULL) {
-		otx_ep_err("Failed to allocate ISM buffer\n");
+		otx_ep_err("Failed to allocate ISM buffer");
 		return(-1);
 	}
 	otx_ep_dbg("ISM: virt: 0x%p, dma: 0x%" PRIX64,
@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
 			ret = -EINVAL;
 		break;
 	default:
-		otx_ep_err("Unsupported device\n");
+		otx_ep_err("Unsupported device");
 		ret = -EINVAL;
 	}

 	if (!ret)
-		otx_ep_info("OTX_EP dev_id[%d]\n", dev_id);
+		otx_ep_info("OTX_EP dev_id[%d]", dev_id);

 	return ret;
 }
@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)

 	ret = otx_ep_chip_specific_setup(otx_epvf);
 	if (ret) {
-		otx_ep_err("Chip specific setup failed\n");
+		otx_ep_err("Chip specific setup failed");
 		goto setup_fail;
 	}

@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 		otx_epvf->eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts;
 		otx_epvf->chip_gen = OTX_EP_CN10XX;
 	} else {
-		otx_ep_err("Invalid chip_id\n");
+		otx_ep_err("Invalid chip_id");
 		ret = -EINVAL;
 		goto setup_fail;
 	}
@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
 	otx_epvf->max_rx_queues = ethdev_queues;
 	otx_epvf->max_tx_queues = ethdev_queues;

-	otx_ep_info("OTX_EP Device is Ready\n");
+	otx_ep_info("OTX_EP Device is Ready");

 setup_fail:
 	return ret;
@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
 	txmode = &conf->txmode;
 	if (eth_dev->data->nb_rx_queues > otx_epvf->max_rx_queues ||
 	    eth_dev->data->nb_tx_queues > otx_epvf->max_tx_queues) {
-		otx_ep_err("invalid num queues\n");
+		otx_ep_err("invalid num queues");
 		return -EINVAL;
 	}
-	otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d\n",
+	otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d",
 		    eth_dev->data->nb_rx_queues, eth_dev->data->nb_tx_queues);

 	otx_epvf->rx_offloads = rxmode->offloads;
@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
 	uint16_t buf_size;

 	if (q_no >= otx_epvf->max_rx_queues) {
-		otx_ep_err("Invalid rx queue number %u\n", q_no);
+		otx_ep_err("Invalid rx queue number %u", q_no);
 		return -EINVAL;
 	}

 	if (num_rx_descs & (num_rx_descs - 1)) {
-		otx_ep_err("Invalid rx desc number should be pow 2  %u\n",
+		otx_ep_err("Invalid rx desc number should be pow 2  %u",
 			   num_rx_descs);
 		return -EINVAL;
 	}
 	if (num_rx_descs < (SDP_GBL_WMARK * 8)) {
-		otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark  %u\n",
+		otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark  %u",
 			   num_rx_descs, (SDP_GBL_WMARK * 8));
 		return -EINVAL;
 	}

-	otx_ep_dbg("setting up rx queue %u\n", q_no);
+	otx_ep_dbg("setting up rx queue %u", q_no);

 	mbp_priv = rte_mempool_get_priv(mp);
 	buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;

 	if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp,
 			     socket_id)) {
-		otx_ep_err("droq allocation failed\n");
+		otx_ep_err("droq allocation failed");
 		return -1;
 	}

@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
 	int q_id = rq->q_no;

 	if (otx_ep_delete_oqs(otx_epvf, q_id))
-		otx_ep_err("Failed to delete OQ:%d\n", q_id);
+		otx_ep_err("Failed to delete OQ:%d", q_id);
 }

 /**
@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
 	int retval;

 	if (q_no >= otx_epvf->max_tx_queues) {
-		otx_ep_err("Invalid tx queue number %u\n", q_no);
+		otx_ep_err("Invalid tx queue number %u", q_no);
 		return -EINVAL;
 	}
 	if (num_tx_descs & (num_tx_descs - 1)) {
-		otx_ep_err("Invalid tx desc number should be pow 2  %u\n",
+		otx_ep_err("Invalid tx desc number should be pow 2  %u",
 			   num_tx_descs);
 		return -EINVAL;
 	}
 	if (num_tx_descs < (SDP_GBL_WMARK * 8)) {
-		otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)\n",
+		otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)",
 			   num_tx_descs, (SDP_GBL_WMARK * 8));
 		return -EINVAL;
 	}
@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
 	retval = otx_ep_setup_iqs(otx_epvf, q_no, num_tx_descs, socket_id);

 	if (retval) {
-		otx_ep_err("IQ(TxQ) creation failed.\n");
+		otx_ep_err("IQ(TxQ) creation failed.");
 		return retval;
 	}

 	eth_dev->data->tx_queues[q_no] = otx_epvf->instr_queue[q_no];
-	otx_ep_dbg("tx queue[%d] setup\n", q_no);
+	otx_ep_dbg("tx queue[%d] setup", q_no);
 	return 0;
 }

@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
 	num_queues = otx_epvf->nb_rx_queues;
 	for (q_no = 0; q_no < num_queues; q_no++) {
 		if (otx_ep_delete_oqs(otx_epvf, q_no)) {
-			otx_ep_err("Failed to delete OQ:%d\n", q_no);
+			otx_ep_err("Failed to delete OQ:%d", q_no);
 			return -EINVAL;
 		}
 	}
-	otx_ep_dbg("Num OQs:%d freed\n", otx_epvf->nb_rx_queues);
+	otx_ep_dbg("Num OQs:%d freed", otx_epvf->nb_rx_queues);

 	num_queues = otx_epvf->nb_tx_queues;
 	for (q_no = 0; q_no < num_queues; q_no++) {
 		if (otx_ep_delete_iqs(otx_epvf, q_no)) {
-			otx_ep_err("Failed to delete IQ:%d\n", q_no);
+			otx_ep_err("Failed to delete IQ:%d", q_no);
 			return -EINVAL;
 		}
 	}
-	otx_ep_dbg("Num IQs:%d freed\n", otx_epvf->nb_tx_queues);
+	otx_ep_dbg("Num IQs:%d freed", otx_epvf->nb_tx_queues);

 	if (rte_eth_dma_zone_free(eth_dev, "ism", 0)) {
-		otx_ep_err("Failed to delete ISM buffer\n");
+		otx_ep_err("Failed to delete ISM buffer");
 		return -EINVAL;
 	}

@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
 	ret = otx_ep_mbox_get_mac_addr(eth_dev, mac_addr);
 	if (ret)
 		return -EINVAL;
-	otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+	otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT,
 		    RTE_ETHER_ADDR_BYTES(mac_addr));
 	return 0;
 }
@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
 	ret_val = otx_ep_dev_get_mac_addr(eth_dev, mac_addr);
 	if (!ret_val) {
 		if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
-			otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT "\n",
+			otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT,
 				    RTE_ETHER_ADDR_BYTES(mac_addr));
 			rte_eth_random_addr(mac_addr->addr_bytes);
-			otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT "\n",
+			otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT,
 				    RTE_ETHER_ADDR_BYTES(mac_addr));
 			ret_val = otx_ep_dev_set_default_mac_addr(eth_dev, mac_addr);
 			if (ret_val) {
-				otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails\n",
+				otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails",
 					    RTE_ETHER_ADDR_BYTES(mac_addr));
 				return ret_val;
 			}
 		}
-		otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT "\n",
+		otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT,
 			    RTE_ETHER_ADDR_BYTES(mac_addr));
 	} else {
-		otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d\n",
+		otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d",
 			    ret_val);
 		return ret_val;
 	}
@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 	otx_epvf->mbox_neg_ver = OTX_EP_MBOX_VERSION_V1;
 	eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
-		otx_ep_err("MAC addresses memory allocation failed\n");
+		otx_ep_err("MAC addresses memory allocation failed");
 		eth_dev->dev_ops = NULL;
 		return -ENOMEM;
 	}
@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
 	    otx_epvf->chip_id == PCI_DEVID_CNF10KA_EP_NET_VF ||
 	    otx_epvf->chip_id == PCI_DEVID_CNF10KB_EP_NET_VF) {
 		otx_epvf->pkind = SDP_OTX2_PKIND_FS0;
-		otx_ep_info("using pkind %d\n", otx_epvf->pkind);
+		otx_ep_info("using pkind %d", otx_epvf->pkind);
 	} else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF) {
 		otx_epvf->pkind = SDP_PKIND;
-		otx_ep_info("Using pkind %d.\n", otx_epvf->pkind);
+		otx_ep_info("Using pkind %d.", otx_epvf->pkind);
 	} else {
-		otx_ep_err("Invalid chip id\n");
+		otx_ep_err("Invalid chip id");
 		return -EINVAL;
 	}

@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)

 	if (otx_ep_eth_dev_query_set_vf_mac(eth_dev,
 				(struct rte_ether_addr *)&vf_mac_addr)) {
-		otx_ep_err("set mac addr failed\n");
+		otx_ep_err("set mac addr failed");
 		return -ENODEV;
 	}
 	rte_ether_addr_copy(&vf_mac_addr, eth_dev->data->mac_addrs);
diff --git a/drivers/net/octeon_ep/otx_ep_mbox.c b/drivers/net/octeon_ep/otx_ep_mbox.c
index 4118645dc7..c92adeaf9a 100644
--- a/drivers/net/octeon_ep/otx_ep_mbox.c
+++ b/drivers/net/octeon_ep/otx_ep_mbox.c
@@ -44,11 +44,11 @@ __otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,
 		}
 	}
 	if (count == OTX_EP_MBOX_TIMEOUT_MS) {
-		otx_ep_err("mbox send Timeout count:%d\n", count);
+		otx_ep_err("mbox send Timeout count:%d", count);
 		return OTX_EP_MBOX_TIMEOUT_MS;
 	}
 	if (rsp->s.type != OTX_EP_MBOX_TYPE_RSP_ACK) {
-		otx_ep_err("mbox received  NACK from PF\n");
+		otx_ep_err("mbox received  NACK from PF");
 		return OTX_EP_MBOX_CMD_STATUS_NACK;
 	}

@@ -65,7 +65,7 @@ otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,

 	rte_spinlock_lock(&otx_ep->mbox_lock);
 	if (otx_ep_cmd_versions[cmd.s.opcode] > otx_ep->mbox_neg_ver) {
-		otx_ep_dbg("CMD:%d not supported in Version:%d\n", cmd.s.opcode,
+		otx_ep_dbg("CMD:%d not supported in Version:%d", cmd.s.opcode,
 			    otx_ep->mbox_neg_ver);
 		rte_spinlock_unlock(&otx_ep->mbox_lock);
 		return -EOPNOTSUPP;
@@ -92,7 +92,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
 	/* Send cmd to read data from PF */
 	ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 	if (ret) {
-		otx_ep_err("mbox bulk read data request failed\n");
+		otx_ep_err("mbox bulk read data request failed");
 		rte_spinlock_unlock(&otx_ep->mbox_lock);
 		return ret;
 	}
@@ -108,7 +108,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
 	while (data_len) {
 		ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 		if (ret) {
-			otx_ep_err("mbox bulk read data request failed\n");
+			otx_ep_err("mbox bulk read data request failed");
 			otx_ep->mbox_data_index = 0;
 			memset(otx_ep->mbox_data_buf, 0, OTX_EP_MBOX_MAX_DATA_BUF_SIZE);
 			rte_spinlock_unlock(&otx_ep->mbox_lock);
@@ -154,10 +154,10 @@ otx_ep_mbox_set_mtu(struct rte_eth_dev *eth_dev, uint16_t mtu)

 	ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 	if (ret) {
-		otx_ep_err("set MTU failed\n");
+		otx_ep_err("set MTU failed");
 		return -EINVAL;
 	}
-	otx_ep_dbg("mtu set  success mtu %u\n", mtu);
+	otx_ep_dbg("mtu set  success mtu %u", mtu);

 	return 0;
 }
@@ -178,10 +178,10 @@ otx_ep_mbox_set_mac_addr(struct rte_eth_dev *eth_dev,
 		cmd.s_set_mac.mac_addr[i] = mac_addr->addr_bytes[i];
 	ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 	if (ret) {
-		otx_ep_err("set MAC address failed\n");
+		otx_ep_err("set MAC address failed");
 		return -EINVAL;
 	}
-	otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+	otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
 		    __func__, RTE_ETHER_ADDR_BYTES(mac_addr));
 	rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
 	return 0;
@@ -201,12 +201,12 @@ otx_ep_mbox_get_mac_addr(struct rte_eth_dev *eth_dev,
 	cmd.s_set_mac.opcode = OTX_EP_MBOX_CMD_GET_MAC_ADDR;
 	ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 	if (ret) {
-		otx_ep_err("get MAC address failed\n");
+		otx_ep_err("get MAC address failed");
 		return -EINVAL;
 	}
 	for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
 		mac_addr->addr_bytes[i] = rsp.s_set_mac.mac_addr[i];
-	otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+	otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
 		    __func__, RTE_ETHER_ADDR_BYTES(mac_addr));
 	return 0;
 }
@@ -224,7 +224,7 @@ int otx_ep_mbox_get_link_status(struct rte_eth_dev *eth_dev,
 	cmd.s_link_status.opcode = OTX_EP_MBOX_CMD_GET_LINK_STATUS;
 	ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
 	if (ret) {
-		otx_ep_err("Get link status failed\n");
+		otx_ep_err("Get link status failed");
 		return -EINVAL;
 	}
 	*oper_up = rsp.s_link_status.status;
@@ -242,7 +242,7 @@ int otx_ep_mbox_get_link_info(struct rte_eth_dev *eth_dev,
 	ret = otx_ep_mbox_bulk_read(otx_ep, OTX_EP_MBOX_CMD_GET_LINK_INFO,
 				      (uint8_t *)&link_info, (int32_t *)&size);
 	if (ret) {
-		otx_ep_err("Get link info failed\n");
+		otx_ep_err("Get link info failed");
 		return ret;
 	}
 	link->link_status = RTE_ETH_LINK_UP;
@@ -310,12 +310,12 @@ int otx_ep_mbox_version_check(struct rte_eth_dev *eth_dev)
 	 * during initialization of PMD driver.
 	 */
 	if (ret == OTX_EP_MBOX_CMD_STATUS_NACK || rsp.s_version.version == 0) {
-		otx_ep_dbg("VF Mbox version fallback to base version from:%u\n",
+		otx_ep_dbg("VF Mbox version fallback to base version from:%u",
 			(uint32_t)cmd.s_version.version);
 		return 0;
 	}
 	otx_ep->mbox_neg_ver = (uint32_t)rsp.s_version.version;
-	otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u\n",
+	otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u",
 		    (uint32_t)cmd.s_version.version,
 		    (uint32_t)rsp.s_version.version);
 	return 0;
diff --git a/drivers/net/octeon_ep/otx_ep_rxtx.c b/drivers/net/octeon_ep/otx_ep_rxtx.c
index c421ef0a1c..65a1f304e8 100644
--- a/drivers/net/octeon_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeon_ep/otx_ep_rxtx.c
@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
 	int ret = 0;

 	if (mz == NULL) {
-		otx_ep_err("Memzone: NULL\n");
+		otx_ep_err("Memzone: NULL");
 		return;
 	}

 	mz_tmp = rte_memzone_lookup(mz->name);
 	if (mz_tmp == NULL) {
-		otx_ep_err("Memzone %s Not Found\n", mz->name);
+		otx_ep_err("Memzone %s Not Found", mz->name);
 		return;
 	}

 	ret = rte_memzone_free(mz);
 	if (ret)
-		otx_ep_err("Memzone free failed : ret = %d\n", ret);
+		otx_ep_err("Memzone free failed : ret = %d", ret);
 }

 /* Free IQ resources */
@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)

 	iq = otx_ep->instr_queue[iq_no];
 	if (iq == NULL) {
-		otx_ep_err("Invalid IQ[%d]\n", iq_no);
+		otx_ep_err("Invalid IQ[%d]", iq_no);
 		return -EINVAL;
 	}

@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)

 	otx_ep->nb_tx_queues--;

-	otx_ep_info("IQ[%d] is deleted\n", iq_no);
+	otx_ep_info("IQ[%d] is deleted", iq_no);

 	return 0;
 }
@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 					     OTX_EP_PCI_RING_ALIGN,
 					     socket_id);
 	if (iq->iq_mz == NULL) {
-		otx_ep_err("IQ[%d] memzone alloc failed\n", iq_no);
+		otx_ep_err("IQ[%d] memzone alloc failed", iq_no);
 		goto iq_init_fail;
 	}

@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 	iq->base_addr = (uint8_t *)iq->iq_mz->addr;

 	if (num_descs & (num_descs - 1)) {
-		otx_ep_err("IQ[%d] descs not in power of 2\n", iq_no);
+		otx_ep_err("IQ[%d] descs not in power of 2", iq_no);
 		goto iq_init_fail;
 	}

@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 			RTE_CACHE_LINE_SIZE,
 			rte_socket_id());
 	if (iq->req_list == NULL) {
-		otx_ep_err("IQ[%d] req_list alloc failed\n", iq_no);
+		otx_ep_err("IQ[%d] req_list alloc failed", iq_no);
 		goto iq_init_fail;
 	}

@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 		sg = rte_zmalloc_socket("sg_entry", (OTX_EP_MAX_SG_LISTS * OTX_EP_SG_ENTRY_SIZE),
 			OTX_EP_SG_ALIGN, rte_socket_id());
 		if (sg == NULL) {
-			otx_ep_err("IQ[%d] sg_entries alloc failed\n", iq_no);
+			otx_ep_err("IQ[%d] sg_entries alloc failed", iq_no);
 			goto iq_init_fail;
 		}

@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
 		iq->req_list[i].finfo.g.sg = sg;
 	}

-	otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d\n",
+	otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d",
 		     iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
 		     iq->nb_desc);

 	iq->mbuf_list = rte_zmalloc_socket("mbuf_list",	(iq->nb_desc * sizeof(struct rte_mbuf *)),
 					   RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!iq->mbuf_list) {
-		otx_ep_err("IQ[%d] mbuf_list alloc failed\n", iq_no);
+		otx_ep_err("IQ[%d] mbuf_list alloc failed", iq_no);
 		goto iq_init_fail;
 	}

@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
 	otx_ep->instr_queue[iq_no] = iq;

 	if (otx_ep_init_instr_queue(otx_ep, iq_no, num_descs, socket_id)) {
-		otx_ep_err("IQ init is failed\n");
+		otx_ep_err("IQ init is failed");
 		goto delete_IQ;
 	}
 	otx_ep->nb_tx_queues++;

-	otx_ep_info("IQ[%d] is created.\n", iq_no);
+	otx_ep_info("IQ[%d] is created.", iq_no);

 	return 0;

@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)

 	droq = otx_ep->droq[oq_no];
 	if (droq == NULL) {
-		otx_ep_err("Invalid droq[%d]\n", oq_no);
+		otx_ep_err("Invalid droq[%d]", oq_no);
 		return -EINVAL;
 	}

@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)

 	otx_ep->nb_rx_queues--;

-	otx_ep_info("OQ[%d] is deleted\n", oq_no);
+	otx_ep_info("OQ[%d] is deleted", oq_no);
 	return 0;
 }

@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
 	for (idx = 0; idx < droq->nb_desc; idx++) {
 		buf = rte_pktmbuf_alloc(droq->mpool);
 		if (buf == NULL) {
-			otx_ep_err("OQ buffer alloc failed\n");
+			otx_ep_err("OQ buffer alloc failed");
 			droq->stats.rx_alloc_failure++;
 			return -ENOMEM;
 		}
@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
 	uint32_t desc_ring_size;
 	int ret;

-	otx_ep_info("OQ[%d] Init start\n", q_no);
+	otx_ep_info("OQ[%d] Init start", q_no);

 	droq = otx_ep->droq[q_no];
 	droq->otx_ep_dev = otx_ep;
@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
 						      socket_id);

 	if (droq->desc_ring_mz == NULL) {
-		otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no);
+		otx_ep_err("OQ:%d desc_ring allocation failed", q_no);
 		goto init_droq_fail;
 	}

 	droq->desc_ring_dma = droq->desc_ring_mz->iova;
 	droq->desc_ring = (struct otx_ep_droq_desc *)droq->desc_ring_mz->addr;

-	otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
+	otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx",
 		    q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
-	otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc);
+	otx_ep_dbg("OQ[%d]: num_desc: %d", q_no, droq->nb_desc);

 	/* OQ buf_list set up */
 	droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
 				(droq->nb_desc * sizeof(struct rte_mbuf *)),
 				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (droq->recv_buf_list == NULL) {
-		otx_ep_err("OQ recv_buf_list alloc failed\n");
+		otx_ep_err("OQ recv_buf_list alloc failed");
 		goto init_droq_fail;
 	}

@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
 	droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ",
 				sizeof(*droq), RTE_CACHE_LINE_SIZE);
 	if (droq == NULL) {
-		otx_ep_err("Droq[%d] Creation Failed\n", oq_no);
+		otx_ep_err("Droq[%d] Creation Failed", oq_no);
 		return -ENOMEM;
 	}
 	otx_ep->droq[oq_no] = droq;

 	if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool,
 			     socket_id)) {
-		otx_ep_err("Droq[%d] Initialization failed\n", oq_no);
+		otx_ep_err("Droq[%d] Initialization failed", oq_no);
 		goto delete_OQ;
 	}
-	otx_ep_info("OQ[%d] is created.\n", oq_no);
+	otx_ep_info("OQ[%d] is created.", oq_no);

 	otx_ep->nb_rx_queues++;

@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
 	case OTX_EP_REQTYPE_NORESP_GATHER:
 		/* This will take care of multiple segments also */
 		rte_pktmbuf_free(mbuf);
-		otx_ep_dbg("IQ buffer freed at idx[%d]\n", idx);
+		otx_ep_dbg("IQ buffer freed at idx[%d]", idx);
 		break;

 	case OTX_EP_REQTYPE_NONE:
 	default:
-		otx_ep_info("This iqreq mode is not supported:%d\n", reqtype);
+		otx_ep_info("This iqreq mode is not supported:%d", reqtype);
 	}

 	/* Reset the request list at this index */
@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
 	num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;

 	if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
-		otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+		otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
 		goto exit;
 	}

@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 		iqcmd.irh.u64 = rte_bswap64(iqcmd.irh.u64);

 #ifdef OTX_EP_IO_DEBUG
-		otx_ep_dbg("After swapping\n");
-		otx_ep_dbg("Word0 [dptr]: 0x%016lx\n",
+		otx_ep_dbg("After swapping");
+		otx_ep_dbg("Word0 [dptr]: 0x%016lx",
 			   (unsigned long)iqcmd.dptr);
-		otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n", (unsigned long)iqcmd.ih);
-		otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n",
+		otx_ep_dbg("Word1 [ihtx]: 0x%016lx", (unsigned long)iqcmd.ih);
+		otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx",
 			   (unsigned long)iqcmd.pki_ih3);
-		otx_ep_dbg("Word3 [rptr]: 0x%016lx\n",
+		otx_ep_dbg("Word3 [rptr]: 0x%016lx",
 			   (unsigned long)iqcmd.rptr);
-		otx_ep_dbg("Word4 [irh]: 0x%016lx\n", (unsigned long)iqcmd.irh);
-		otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n",
+		otx_ep_dbg("Word4 [irh]: 0x%016lx", (unsigned long)iqcmd.irh);
+		otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx",
 				(unsigned long)iqcmd.exhdr[0]);
 		rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m));
 #endif
@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
 	if (unlikely(!info->length)) {
 		int retry = OTX_EP_MAX_DELAYED_PKT_RETRIES;
 		/* otx_ep_dbg("OCTEON DROQ[%d]: read_idx: %d; Data not ready "
-		 * "yet, Retry; pending=%lu\n", droq->q_no, droq->read_idx,
+		 * "yet, Retry; pending=%lu", droq->q_no, droq->read_idx,
 		 * droq->pkts_pending);
 		 */
 		droq->stats.pkts_delayed_data++;
@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
 			rte_delay_us_block(50);
 		}
 		if (!retry && !info->length) {
-			otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!\n",
+			otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!",
 				   droq->q_no, droq->read_idx);
 			/* May be zero length packet; drop it */
 			assert(0);
@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,

 				last_buf = mbuf;
 			} else {
-				otx_ep_err("no buf\n");
+				otx_ep_err("no buf");
 				assert(0);
 			}

diff --git a/drivers/net/octeon_ep/otx_ep_vf.c b/drivers/net/octeon_ep/otx_ep_vf.c
index 236b7a874c..7defb0f13d 100644
--- a/drivers/net/octeon_ep/otx_ep_vf.c
+++ b/drivers/net/octeon_ep/otx_ep_vf.c
@@ -142,7 +142,7 @@ otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
 	iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr +
 			   OTX_EP_R_IN_CNTS(iq_no);

-	otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p\n",
+	otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p",
 		     iq_no, iq->doorbell_reg, iq->inst_cnt_reg);

 	loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -220,14 +220,14 @@ otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
 	}
 	if (loop < 0)
 		return -EIO;
-	otx_ep_dbg("OTX_EP_R[%d]_credit:%x\n", oq_no,
+	otx_ep_dbg("OTX_EP_R[%d]_credit:%x", oq_no,
 		     rte_read32(droq->pkts_credit_reg));

 	/* Clear the OQ_OUT_CNTS doorbell  */
 	reg_val = rte_read32(droq->pkts_sent_reg);
 	rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);

-	otx_ep_dbg("OTX_EP_R[%d]_sent: %x\n", oq_no,
+	otx_ep_dbg("OTX_EP_R[%d]_sent: %x", oq_no,
 		     rte_read32(droq->pkts_sent_reg));

 	loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -259,7 +259,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
 	}

 	if (loop < 0) {
-		otx_ep_err("dbell reset failed\n");
+		otx_ep_err("dbell reset failed");
 		return -EIO;
 	}

@@ -269,7 +269,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)

 	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no));

-	otx_ep_info("IQ[%d] enable done\n", q_no);
+	otx_ep_info("IQ[%d] enable done", q_no);

 	return 0;
 }
@@ -290,7 +290,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
 		rte_delay_ms(1);
 	}
 	if (loop < 0) {
-		otx_ep_err("dbell reset failed\n");
+		otx_ep_err("dbell reset failed");
 		return -EIO;
 	}

@@ -299,7 +299,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
 	reg_val |= 0x1ull;
 	otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no));

-	otx_ep_info("OQ[%d] enable done\n", q_no);
+	otx_ep_info("OQ[%d] enable done", q_no);

 	return 0;
 }
@@ -402,10 +402,10 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 	if (otx_ep->conf == NULL) {
 		otx_ep->conf = otx_ep_get_defconf(otx_ep);
 		if (otx_ep->conf == NULL) {
-			otx_ep_err("OTX_EP VF default config not found\n");
+			otx_ep_err("OTX_EP VF default config not found");
 			return -ENOENT;
 		}
-		otx_ep_info("Default config is used\n");
+		otx_ep_info("Default config is used");
 	}

 	/* Get IOQs (RPVF] count */
@@ -414,7 +414,7 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
 	otx_ep->sriov_info.rings_per_vf = ((reg_val >> OTX_EP_R_IN_CTL_RPVF_POS)
 					  & OTX_EP_R_IN_CTL_RPVF_MASK);

-	otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf);
+	otx_ep_info("OTX_EP RPVF: %d", otx_ep->sriov_info.rings_per_vf);

 	otx_ep->fn_list.setup_iq_regs       = otx_ep_setup_iq_regs;
 	otx_ep->fn_list.setup_oq_regs       = otx_ep_setup_oq_regs;
diff --git a/drivers/net/octeontx/base/octeontx_pkovf.c b/drivers/net/octeontx/base/octeontx_pkovf.c
index 5d445dfb49..7aec84a813 100644
--- a/drivers/net/octeontx/base/octeontx_pkovf.c
+++ b/drivers/net/octeontx/base/octeontx_pkovf.c
@@ -364,7 +364,7 @@ octeontx_pko_chan_stop(struct octeontx_pko_vf_ctl_s *ctl, uint64_t chanid)

 		res = octeontx_pko_dq_close(dq);
 		if (res < 0)
-			octeontx_log_err("closing DQ%d failed\n", dq);
+			octeontx_log_err("closing DQ%d failed", dq);

 		dq_cnt++;
 		dq++;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 2a8378a33e..5f0cd1bb7f 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -1223,7 +1223,7 @@ octeontx_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
 	if (dev->data->tx_queues[qid]) {
 		res = octeontx_dev_tx_queue_stop(dev, qid);
 		if (res < 0)
-			octeontx_log_err("failed stop tx_queue(%d)\n", qid);
+			octeontx_log_err("failed stop tx_queue(%d)", qid);

 		rte_free(dev->data->tx_queues[qid]);
 	}
@@ -1342,7 +1342,7 @@ octeontx_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,

 	/* Verify queue index */
 	if (qidx >= dev->data->nb_rx_queues) {
-		octeontx_log_err("QID %d not supported (0 - %d available)\n",
+		octeontx_log_err("QID %d not supported (0 - %d available)",
 				qidx, (dev->data->nb_rx_queues - 1));
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index bfec085045..9626c343dc 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -1093,11 +1093,11 @@ set_iface_direction(const char *iface, pcap_t *pcap,
 {
 	const char *direction_str = (direction == PCAP_D_IN) ? "IN" : "OUT";
 	if (pcap_setdirection(pcap, direction) < 0) {
-		PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s\n",
+		PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s",
 				iface, direction_str, pcap_geterr(pcap));
 		return -1;
 	}
-	PMD_LOG(INFO, "Setting %s pcap direction %s\n",
+	PMD_LOG(INFO, "Setting %s pcap direction %s",
 			iface, direction_str);
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 0073dd7405..dc04a52639 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -161,7 +161,7 @@ pfe_recv_pkts_on_intr(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		writel(readl(HIF_INT_ENABLE) | HIF_RXPKT_INT, HIF_INT_ENABLE);
 		ret = epoll_wait(priv->pfe->hif.epoll_fd, &epoll_ev, 1, ticks);
 		if (ret < 0 && errno != EINTR)
-			PFE_PMD_ERR("epoll_wait fails with %d\n", errno);
+			PFE_PMD_ERR("epoll_wait fails with %d", errno);
 	}

 	return work_done;
@@ -338,9 +338,9 @@ pfe_eth_open_cdev(struct pfe_eth_priv_s *priv)

 	pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDONLY);
 	if (pfe_cdev_fd < 0) {
-		PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+		PFE_PMD_WARN("Unable to open PFE device file (%s).",
 			     PFE_CDEV_PATH);
-		PFE_PMD_WARN("Link status update will not be available.\n");
+		PFE_PMD_WARN("Link status update will not be available.");
 		priv->link_fd = PFE_CDEV_INVALID_FD;
 		return -1;
 	}
@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)

 		ret = ioctl(priv->link_fd, ioctl_cmd, &lstatus);
 		if (ret != 0) {
-			PFE_PMD_ERR("Unable to fetch link status (ioctl)\n");
+			PFE_PMD_ERR("Unable to fetch link status (ioctl)");
 			return -1;
 		}
-		PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.\n",
+		PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.",
 			      lstatus, priv->id);
 	}

 	if (old.link_status == lstatus) {
 		/* no change in status */
-		PFE_PMD_DEBUG("No change in link status; Not updating.\n");
+		PFE_PMD_DEBUG("No change in link status; Not updating.");
 		return -1;
 	}

@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)

 	pfe_eth_atomic_write_link_status(dev, &link);

-	PFE_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+	PFE_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
 		     link.link_status ? "up" : "down");

 	return 0;
@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)

 	addr = of_get_address(np, 0, &cbus_size, NULL);
 	if (!addr) {
-		PFE_PMD_ERR("of_get_address cannot return qman address\n");
+		PFE_PMD_ERR("of_get_address cannot return qman address");
 		goto err;
 	}
 	cbus_addr = of_translate_address(np, addr);
 	if (!cbus_addr) {
-		PFE_PMD_ERR("of_translate_address failed\n");
+		PFE_PMD_ERR("of_translate_address failed");
 		goto err;
 	}

 	addr = of_get_address(np, 1, &ddr_size, NULL);
 	if (!addr) {
-		PFE_PMD_ERR("of_get_address cannot return qman address\n");
+		PFE_PMD_ERR("of_get_address cannot return qman address");
 		goto err;
 	}

 	g_pfe->ddr_phys_baseaddr = of_translate_address(np, addr);
 	if (!g_pfe->ddr_phys_baseaddr) {
-		PFE_PMD_ERR("of_translate_address failed\n");
+		PFE_PMD_ERR("of_translate_address failed");
 		goto err;
 	}

diff --git a/drivers/net/pfe/pfe_hif.c b/drivers/net/pfe/pfe_hif.c
index e2b23bbeb7..abb9cde996 100644
--- a/drivers/net/pfe/pfe_hif.c
+++ b/drivers/net/pfe/pfe_hif.c
@@ -309,7 +309,7 @@ client_put_rxpacket(struct hif_rx_queue *queue,
 	if (readl(&desc->ctrl) & CL_DESC_OWN) {
 		mbuf = rte_cpu_to_le_64(rte_pktmbuf_alloc(pool));
 		if (unlikely(!mbuf)) {
-			PFE_PMD_WARN("Buffer allocation failure\n");
+			PFE_PMD_WARN("Buffer allocation failure");
 			return NULL;
 		}

@@ -770,9 +770,9 @@ pfe_hif_rx_idle(struct pfe_hif *hif)
 	} while (--hif_stop_loop);

 	if (readl(HIF_RX_STATUS) & BDP_CSR_RX_DMA_ACTV)
-		PFE_PMD_ERR("Failed\n");
+		PFE_PMD_ERR("Failed");
 	else
-		PFE_PMD_INFO("Done\n");
+		PFE_PMD_INFO("Done");
 }
 #endif

@@ -806,7 +806,7 @@ pfe_hif_init(struct pfe *pfe)

 		pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDWR);
 		if (pfe_cdev_fd < 0) {
-			PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+			PFE_PMD_WARN("Unable to open PFE device file (%s).",
 				     PFE_CDEV_PATH);
 			pfe->cdev_fd = PFE_CDEV_INVALID_FD;
 			return -1;
@@ -817,7 +817,7 @@ pfe_hif_init(struct pfe *pfe)
 		/* hif interrupt enable */
 		err = ioctl(pfe->cdev_fd, PFE_CDEV_HIF_INTR_EN, &event_fd);
 		if (err) {
-			PFE_PMD_ERR("\nioctl failed for intr enable err: %d\n",
+			PFE_PMD_ERR("ioctl failed for intr enable err: %d",
 					errno);
 			goto err0;
 		}
@@ -826,7 +826,7 @@ pfe_hif_init(struct pfe *pfe)
 		epoll_ev.data.fd = event_fd;
 		err = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, event_fd, &epoll_ev);
 		if (err < 0) {
-			PFE_PMD_ERR("epoll_ctl failed with err = %d\n", errno);
+			PFE_PMD_ERR("epoll_ctl failed with err = %d", errno);
 			goto err0;
 		}
 		pfe->hif.epoll_fd = epoll_fd;
diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c
index 6fe6d33d23..541ba365c6 100644
--- a/drivers/net/pfe/pfe_hif_lib.c
+++ b/drivers/net/pfe/pfe_hif_lib.c
@@ -157,7 +157,7 @@ hif_lib_client_init_rx_buffers(struct hif_client_s *client,
 		queue->queue_id = 0;
 		queue->port_id = client->port_id;
 		queue->priv = client->priv;
-		PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d\n", qno,
+		PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d", qno,
 			      queue->base, queue->size);
 	}

diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c35585f5fd..dcc8cbe943 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -887,7 +887,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
 	mbuf = txq->sw_tx_ring[idx];
 	if (mbuf) {
 		nb_segs = mbuf->nb_segs;
-		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+		PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u", nb_segs);
 		while (nb_segs) {
 			/* It's like consuming rxbuf in recv() */
 			ecore_chain_consume(&txq->tx_pbl);
@@ -897,7 +897,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
 		rte_pktmbuf_free(mbuf);
 		txq->sw_tx_ring[idx] = NULL;
 		txq->sw_tx_cons++;
-		PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+		PMD_TX_LOG(DEBUG, txq, "Freed tx packet");
 	} else {
 		ecore_chain_consume(&txq->tx_pbl);
 		txq->nb_tx_avail++;
@@ -919,7 +919,7 @@ qede_process_tx_compl(__rte_unused struct ecore_dev *edev,

 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 	sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
-	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+	PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u",
 		   abs(hw_bd_cons - sw_tx_cons));
 #endif
 	while (hw_bd_cons !=  ecore_chain_get_cons_idx(&txq->tx_pbl))
@@ -1353,7 +1353,7 @@ qede_rx_process_tpa_cmn_cont_end_cqe(__rte_unused struct qede_dev *qdev,
 		tpa_info->tpa_tail = curr_frag;
 		qede_rx_bd_ring_consume(rxq);
 		if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
-			PMD_RX_LOG(ERR, rxq, "mbuf allocation fails\n");
+			PMD_RX_LOG(ERR, rxq, "mbuf allocation fails");
 			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
 			rxq->rx_alloc_errors++;
 		}
@@ -1365,7 +1365,7 @@ qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
 			     struct qede_rx_queue *rxq,
 			     struct eth_fast_path_rx_tpa_cont_cqe *cqe)
 {
-	PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]\n",
+	PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]",
 		   cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]));
 	/* only len_list[0] will have value */
 	qede_rx_process_tpa_cmn_cont_end_cqe(qdev, rxq, cqe->tpa_agg_index,
@@ -1388,7 +1388,7 @@ qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
 	rx_mb->pkt_len = cqe->total_packet_len;

 	PMD_RX_LOG(INFO, rxq, "TPA End[%d] reason %d cqe_len %d nb_segs %d"
-		   " pkt_len %d\n", cqe->tpa_agg_index, cqe->end_reason,
+		   " pkt_len %d", cqe->tpa_agg_index, cqe->end_reason,
 		   rte_le_to_cpu_16(cqe->len_list[0]), rx_mb->nb_segs,
 		   rx_mb->pkt_len);
 }
@@ -1471,7 +1471,7 @@ qede_process_sg_pkts(void *p_rxq,  struct rte_mbuf *rx_mb,
 							pkt_len;
 		if (unlikely(!cur_size)) {
 			PMD_RX_LOG(ERR, rxq, "Length is 0 while %u BDs"
-				   " left for mapping jumbo\n", num_segs);
+				   " left for mapping jumbo", num_segs);
 			qede_recycle_rx_bd_ring(rxq, qdev, num_segs);
 			return -EINVAL;
 		}
@@ -1497,7 +1497,7 @@ print_rx_bd_info(struct rte_mbuf *m, struct qede_rx_queue *rxq,
 	PMD_RX_LOG(INFO, rxq,
 		"len 0x%04x bf 0x%04x hash_val 0x%x"
 		" ol_flags 0x%04lx l2=%s l3=%s l4=%s tunn=%s"
-		" inner_l2=%s inner_l3=%s inner_l4=%s\n",
+		" inner_l2=%s inner_l3=%s inner_l4=%s",
 		m->data_len, bitfield, m->hash.rss,
 		(unsigned long)m->ol_flags,
 		rte_get_ptype_l2_name(m->packet_type),
@@ -1548,7 +1548,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)

 			PMD_RX_LOG(ERR, rxq,
 				   "New buffers allocation failed,"
-				   "dropping incoming packets\n");
+				   "dropping incoming packets");
 			dev = &rte_eth_devices[rxq->port_id];
 			dev->data->rx_mbuf_alloc_failed += count;
 			rxq->rx_alloc_errors += count;
@@ -1579,13 +1579,13 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);

 		if (likely(cqe_type == ETH_RX_CQE_TYPE_REGULAR)) {
 			fp_cqe = &cqe->fast_path_regular;
 		} else {
 			if (cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH) {
-				PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+				PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
 				ecore_eth_cqe_completion
 					(&edev->hwfns[rxq->queue_id %
 						      edev->num_hwfns],
@@ -1611,10 +1611,10 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 #endif

 		if (unlikely(qede_tunn_exist(parse_flag))) {
-			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x\n",
+					    "L4 csum failed, flags = 0x%x",
 					    parse_flag);
 				rxq->rx_hw_errors++;
 				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1624,7 +1624,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)

 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					"Outer L3 csum failed, flags = 0x%x\n",
+					"Outer L3 csum failed, flags = 0x%x",
 					parse_flag);
 				rxq->rx_hw_errors++;
 				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1659,7 +1659,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 			PMD_RX_LOG(ERR, rxq,
-				    "L4 csum failed, flags = 0x%x\n",
+				    "L4 csum failed, flags = 0x%x",
 				    parse_flag);
 			rxq->rx_hw_errors++;
 			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1667,7 +1667,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
-			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
 				   parse_flag);
 			rxq->rx_hw_errors++;
 			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -1776,7 +1776,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)

 			PMD_RX_LOG(ERR, rxq,
 				   "New buffers allocation failed,"
-				   "dropping incoming packets\n");
+				   "dropping incoming packets");
 			dev = &rte_eth_devices[rxq->port_id];
 			dev->data->rx_mbuf_alloc_failed += count;
 			rxq->rx_alloc_errors += count;
@@ -1805,7 +1805,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		cqe =
 		    (union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
 		cqe_type = cqe->fast_path_regular.type;
-		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+		PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);

 		switch (cqe_type) {
 		case ETH_RX_CQE_TYPE_REGULAR:
@@ -1823,7 +1823,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			 */
 			PMD_RX_LOG(INFO, rxq,
 			 "TPA start[%d] - len_on_first_bd %d header %d"
-			 " [bd_list[0] %d], [seg_len %d]\n",
+			 " [bd_list[0] %d], [seg_len %d]",
 			 cqe_start_tpa->tpa_agg_index,
 			 rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
 			 cqe_start_tpa->header_len,
@@ -1843,7 +1843,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rx_mb = rxq->tpa_info[tpa_agg_idx].tpa_head;
 			goto tpa_end;
 		case ETH_RX_CQE_TYPE_SLOW_PATH:
-			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+			PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
 			ecore_eth_cqe_completion(
 				&edev->hwfns[rxq->queue_id % edev->num_hwfns],
 				(struct eth_slow_path_rx_cqe *)cqe);
@@ -1881,10 +1881,10 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			rss_hash = rte_le_to_cpu_32(cqe_start_tpa->rss_hash);
 		}
 		if (qede_tunn_exist(parse_flag)) {
-			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+			PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
 			if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					    "L4 csum failed, flags = 0x%x\n",
+					    "L4 csum failed, flags = 0x%x",
 					    parse_flag);
 				rxq->rx_hw_errors++;
 				ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1894,7 +1894,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)

 			if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
 				PMD_RX_LOG(ERR, rxq,
-					"Outer L3 csum failed, flags = 0x%x\n",
+					"Outer L3 csum failed, flags = 0x%x",
 					parse_flag);
 				  rxq->rx_hw_errors++;
 				ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1933,7 +1933,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		 */
 		if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
 			PMD_RX_LOG(ERR, rxq,
-				    "L4 csum failed, flags = 0x%x\n",
+				    "L4 csum failed, flags = 0x%x",
 				    parse_flag);
 			rxq->rx_hw_errors++;
 			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1941,7 +1941,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
 		}
 		if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
-			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+			PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
 				   parse_flag);
 			rxq->rx_hw_errors++;
 			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -2117,13 +2117,13 @@ print_tx_bd_info(struct qede_tx_queue *txq,
 		   rte_cpu_to_le_16(bd1->data.bitfields));
 	if (bd2)
 		PMD_TX_LOG(INFO, txq,
-		   "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x\n",
+		   "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x",
 		   rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1,
 		   bd2->data.bitfields2, bd2->data.tunn_ip_size);
 	if (bd3)
 		PMD_TX_LOG(INFO, txq,
 		   "BD3: nbytes=0x%04x bf=0x%04x MSS=0x%04x "
-		   "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x\n",
+		   "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x",
 		   rte_cpu_to_le_16(bd3->nbytes),
 		   rte_cpu_to_le_16(bd3->data.bitfields),
 		   rte_cpu_to_le_16(bd3->data.lso_mss),
@@ -2131,7 +2131,7 @@ print_tx_bd_info(struct qede_tx_queue *txq,
 		   bd3->data.tunn_hdr_size_w);

 	rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
-	PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+	PMD_TX_LOG(INFO, txq, "TX offloads = %s", ol_buf);
 }
 #endif

@@ -2201,7 +2201,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,

 #ifdef RTE_LIBRTE_QEDE_DEBUG_TX
 	if (unlikely(i != nb_pkts))
-		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+		PMD_TX_LOG(ERR, txq, "TX prepare failed for %u",
 			   nb_pkts - i);
 #endif
 	return i;
@@ -2215,16 +2215,16 @@ qede_mpls_tunn_tx_sanity_check(struct rte_mbuf *mbuf,
 			       struct qede_tx_queue *txq)
 {
 	if (((mbuf->outer_l2_len + mbuf->outer_l3_len) / 2) > 0xff)
-		PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow\n");
+		PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow");
 	if (((mbuf->outer_l2_len + mbuf->outer_l3_len +
 		MPLSINUDP_HDR_SIZE) / 2) > 0xff)
-		PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow\n");
+		PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow");
 	if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE) / 2) >
 		ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK)
-		PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+		PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
 	if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE + mbuf->l3_len) / 2) >
 		ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK)
-		PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+		PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
 }
 #endif

diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ba2ef4058e..ee563c55ce 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	/* Apply new link configurations if changed */
 	ret = nicvf_apply_link_speed(dev);
 	if (ret) {
-		PMD_INIT_LOG(ERR, "Failed to set link configuration\n");
+		PMD_INIT_LOG(ERR, "Failed to set link configuration");
 		return ret;
 	}

diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ad29c3cfec..a8bdc10232 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
 		ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
 		if (ssid == 0x1) {
 			PMD_INIT_LOG(ERR,
-				"Read of internal subsystem device id failed\n");
+				"Read of internal subsystem device id failed");
 			return -ENODEV;
 		}
 		hw->subsystem_device_id = (u16)ssid >> 8 | (u16)ssid << 8;
@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
 		PMD_DRV_LOG(INFO, "SFP not present.");
 	} else if (err == 0) {
 		hw->mac.setup_sfp(hw);
-		PMD_DRV_LOG(INFO, "detected SFP+: %d\n", hw->phy.sfp_type);
+		PMD_DRV_LOG(INFO, "detected SFP+: %d", hw->phy.sfp_type);
 		txgbe_dev_setup_link_alarm_handler(dev);
 		txgbe_dev_link_update(dev, 0);
 	}
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index f9f8108fb8..4af49dd802 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -100,7 +100,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
 		/* Fail if no match and no free entries*/
 		if (ip_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Rx IP table\n");
+				    "No free entry left in the Rx IP table");
 			return -1;
 		}

@@ -114,7 +114,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
 		/* Fail if no free entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Rx SA table\n");
+				    "No free entry left in the Rx SA table");
 			return -1;
 		}

@@ -210,7 +210,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
 		/* Fail if no free entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "No free entry left in the Tx SA table\n");
+				    "No free entry left in the Tx SA table");
 			return -1;
 		}

@@ -269,7 +269,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match*/
 		if (ip_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Rx IP table\n");
+				    "Entry not found in the Rx IP table");
 			return -1;
 		}

@@ -284,7 +284,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Rx SA table\n");
+				    "Entry not found in the Rx SA table");
 			return -1;
 		}

@@ -329,7 +329,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
 		/* Fail if no match entries*/
 		if (sa_index < 0) {
 			PMD_DRV_LOG(ERR,
-				    "Entry not found in the Tx SA table\n");
+				    "Entry not found in the Tx SA table");
 			return -1;
 		}
 		reg_val = TXGBE_IPSRXIDX_WRITE | (sa_index << 3);
@@ -359,7 +359,7 @@ txgbe_crypto_create_session(void *device,
 	if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
 			conf->crypto_xform->aead.algo !=
 					RTE_CRYPTO_AEAD_AES_GCM) {
-		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+		PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
 		return -ENOTSUP;
 	}
 	aead_xform = &conf->crypto_xform->aead;
@@ -368,14 +368,14 @@ txgbe_crypto_create_session(void *device,
 		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
-			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+			PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
 			return -ENOTSUP;
 		}
 	} else {
 		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
-			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+			PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
 			return -ENOTSUP;
 		}
 	}
@@ -389,7 +389,7 @@ txgbe_crypto_create_session(void *device,

 	if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
 		if (txgbe_crypto_add_sa(ic_session)) {
-			PMD_DRV_LOG(ERR, "Failed to add SA\n");
+			PMD_DRV_LOG(ERR, "Failed to add SA");
 			return -EPERM;
 		}
 	}
@@ -411,12 +411,12 @@ txgbe_crypto_remove_session(void *device,
 	struct txgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);

 	if (eth_dev != ic_session->dev) {
-		PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+		PMD_DRV_LOG(ERR, "Session not bound to this device");
 		return -ENODEV;
 	}

 	if (txgbe_crypto_remove_sa(eth_dev, ic_session)) {
-		PMD_DRV_LOG(ERR, "Failed to remove session\n");
+		PMD_DRV_LOG(ERR, "Failed to remove session");
 		return -EFAULT;
 	}

diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 176f79005c..700632bd88 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -85,7 +85,7 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 			sizeof(struct txgbe_vf_info) * vf_num, 0);
 	if (*vfinfo == NULL) {
 		PMD_INIT_LOG(ERR,
-			"Cannot allocate memory for private VF data\n");
+			"Cannot allocate memory for private VF data");
 		return -ENOMEM;
 	}

@@ -167,14 +167,14 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 	struct txgbe_ethertype_filter ethertype_filter;

 	if (!hw->mac.set_ethertype_anti_spoofing) {
-		PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+		PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
 		return;
 	}

 	i = txgbe_ethertype_filter_lookup(filter_info,
 					  TXGBE_ETHERTYPE_FLOW_CTRL);
 	if (i >= 0) {
-		PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+		PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
 		return;
 	}

@@ -187,7 +187,7 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
 	i = txgbe_ethertype_filter_insert(filter_info,
 					  &ethertype_filter);
 	if (i < 0) {
-		PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+		PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
 		return;
 	}

@@ -408,7 +408,7 @@ txgbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)

 	vmolr = rd32(hw, TXGBE_POOLETHCTL(vf));

-	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+	PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);

 	vmolr &= ~TXGBE_POOLETHCTL_MCP;

@@ -570,7 +570,7 @@ txgbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
 		break;
 	}

-	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+	PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
 		api_version, vf);

 	return -1;
@@ -614,7 +614,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	case RTE_ETH_MQ_TX_NONE:
 	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
-			", but its tx mode = %d\n", vf,
+			", but its tx mode = %d", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;

@@ -648,7 +648,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 		break;

 	default:
-		PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+		PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
 			eth_conf->txmode.mq_mode);
 		return -1;
 	}
@@ -704,7 +704,7 @@ txgbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
 		if (!(fctrl & TXGBE_PSRCTL_UCP)) {
 			/* VF promisc requires PF in promisc */
 			PMD_DRV_LOG(ERR,
-			       "Enabling VF promisc requires PF in promisc\n");
+			       "Enabling VF promisc requires PF in promisc");
 			return -1;
 		}

@@ -741,7 +741,7 @@ txgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)

 	if (index) {
 		if (!rte_is_valid_assigned_ether_addr(ea)) {
-			PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+			PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
 			return -1;
 		}

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 1bfd6aba80..d93d443ec9 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
 	scvq = virtqueue_alloc(&dev->hw, vq->vq_queue_index, vq->vq_nentries,
 			VTNET_CQ, SOCKET_ID_ANY, name);
 	if (!scvq) {
-		PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq\n", dev->path);
+		PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq", dev->path);
 		return -ENOMEM;
 	}

diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 70ae9c6035..f98cdb6d58 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 			ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 			if (ret != 0)
 				PMD_INIT_LOG(DEBUG,
-					"Failed in setup memory region cmd\n");
+					"Failed in setup memory region cmd");
 			ret = 0;
 		} else {
-			PMD_INIT_LOG(DEBUG, "Failed to setup memory region\n");
+			PMD_INIT_LOG(DEBUG, "Failed to setup memory region");
 		}
 	} else {
 		PMD_INIT_LOG(WARNING, "Memregs can't init (rx: %d, tx: %d)",
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 380f41f98b..e226641fdf 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1341,7 +1341,7 @@ vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev)
 			/* Zero number of descriptors in the configuration of the RX queue */
 			if (ret == 0) {
 				PMD_INIT_LOG(ERR,
-					"Invalid configuration in Rx queue: %d, buffers ring: %d\n",
+					"Invalid configuration in Rx queue: %d, buffers ring: %d",
 					i, j);
 				return -EINVAL;
 			}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index aeee4ac289..de8c024abb 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -68,7 +68,7 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_CMDIF_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -99,14 +99,14 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
 	do {
 		ret = qbman_swp_enqueue_multiple(swp, &eqdesc, &fd, NULL, 1);
 		if (ret < 0 && ret != -EBUSY)
-			DPAA2_CMDIF_ERR("Transmit failure with err: %d\n", ret);
+			DPAA2_CMDIF_ERR("Transmit failure with err: %d", ret);
 		retry_count++;
 	} while ((ret == -EBUSY) && (retry_count < DPAA2_MAX_TX_RETRY_COUNT));

 	if (ret < 0)
 		return ret;

-	DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet\n");
+	DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet");

 	return 1;
 }
@@ -133,7 +133,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
 			DPAA2_CMDIF_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+				"Failed to allocate IO portal, tid: %d",
 				rte_gettid());
 			return 0;
 		}
@@ -152,7 +152,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,

 	while (1) {
 		if (qbman_swp_pull(swp, &pulldesc)) {
-			DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy\n");
+			DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy");
 			/* Portal was busy, try again */
 			continue;
 		}
@@ -169,7 +169,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	/* Check for valid frame. */
 	status = (uint8_t)qbman_result_DQ_flags(dq_storage);
 	if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
-		DPAA2_CMDIF_DP_DEBUG("No frame is delivered\n");
+		DPAA2_CMDIF_DP_DEBUG("No frame is delivered");
 		return 0;
 	}

@@ -181,7 +181,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
 	cmdif_rcv_cnxt->flc = DPAA2_GET_FD_FLC(fd);
 	cmdif_rcv_cnxt->frc = DPAA2_GET_FD_FRC(fd);

-	DPAA2_CMDIF_DP_DEBUG("packet received\n");
+	DPAA2_CMDIF_DP_DEBUG("packet received");

 	return 1;
 }
diff --git a/drivers/raw/ifpga/afu_pmd_n3000.c b/drivers/raw/ifpga/afu_pmd_n3000.c
index 67b3941265..6aae1b224e 100644
--- a/drivers/raw/ifpga/afu_pmd_n3000.c
+++ b/drivers/raw/ifpga/afu_pmd_n3000.c
@@ -1506,7 +1506,7 @@ static int dma_afu_set_irqs(struct afu_rawdev *dev, uint32_t vec_start,
 	rte_memcpy(&irq_set->data, efds, sizeof(*efds) * count);
 	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 	if (ret)
-		IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts\n");
+		IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts");

 	rte_free(irq_set);
 	return ret;
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index f89bd3f9e2..997fbf8a0d 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -383,7 +383,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
 			goto fail;

 		if (value == 0xdeadbeef) {
-			IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x\n",
+			IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x",
 					raw_dev->dev_id, sensor->name, value);
 			continue;
 		}
@@ -391,13 +391,13 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
 		/* monitor temperature sensors */
 		if (!strcmp(sensor->name, "Board Temperature") ||
 				!strcmp(sensor->name, "FPGA Die Temperature")) {
-			IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d\n",
+			IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d",
 					sensor->name, value, sensor->high_warn,
 					sensor->high_fatal);

 			if (HIGH_WARN(sensor, value) ||
 				LOW_WARN(sensor, value)) {
-				IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d\n",
+				IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d",
 					sensor->name, value);
 				*gsd_start = true;
 				break;
@@ -408,7 +408,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
 		if (!strcmp(sensor->name, "12V AUX Voltage")) {
 			if (value < AUX_VOLTAGE_WARN) {
 				IFPGA_RAWDEV_PMD_INFO(
-					"%s reach threshold %d mV\n",
+					"%s reach threshold %d mV",
 					sensor->name, value);
 				*gsd_start = true;
 				break;
@@ -444,7 +444,7 @@ static int set_surprise_link_check_aer(
 	if (ifpga_monitor_sensor(rdev, &enable))
 		return -EFAULT;
 	if (enable || force_disable) {
-		IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown\n");
+		IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown");
 		ifpga_rdev->aer_enable = 1;
 		/* get bridge fd */
 		strlcpy(path, "/sys/bus/pci/devices/", sizeof(path));
@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
 			continue;

 		if (ifpga_fill_afu_dev(acc, afu_dev)) {
-			IFPGA_RAWDEV_PMD_ERR("cannot get info\n");
+			IFPGA_RAWDEV_PMD_ERR("cannot get info");
 			return -ENOENT;
 		}
 	}
@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,

 	ret = opae_manager_flash(mgr, port_id, buffer, size, status);
 	if (ret) {
-		IFPGA_RAWDEV_PMD_ERR("%s pr error %d\n", __func__, ret);
+		IFPGA_RAWDEV_PMD_ERR("%s pr error %d", __func__, ret);
 		return ret;
 	}

 	ret = opae_bridge_reset(br);
 	if (ret) {
-		IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d\n",
+		IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d",
 				__func__, port_id, ret);
 		return ret;
 	}
@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,

 	file_fd = open(file_name, O_RDONLY);
 	if (file_fd < 0) {
-		IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s\n",
+		IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s",
 				__func__, file_name);
-		IFPGA_RAWDEV_PMD_ERR("Message : %s\n", strerror(errno));
+		IFPGA_RAWDEV_PMD_ERR("Message : %s", strerror(errno));
 		return -EINVAL;
 	}
 	ret = stat(file_name, &file_stat);
 	if (ret) {
-		IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s\n",
+		IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s",
 				file_name);
 		ret = -EINVAL;
 		goto close_fd;
@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
 		goto close_fd;
 	}

-	IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu\n", buffer_size);
+	IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu", buffer_size);
 	buffer = rte_malloc(NULL, buffer_size, 0);
 	if (!buffer) {
 		ret = -ENOMEM;
@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,

 	/*do PR now*/
 	ret = fpga_pr(rawdev, port_id, buffer, buffer_size, &pr_error);
-	IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.\n", port_id,
+	IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.", port_id,
 		ret ? "failed" : "success");
 	if (ret) {
 		ret = -EINVAL;
@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
 				afu_pr_conf->afu_id.port,
 				afu_pr_conf->bs_path);
 		if (ret) {
-			IFPGA_RAWDEV_PMD_ERR("do pr error %d\n", ret);
+			IFPGA_RAWDEV_PMD_ERR("do pr error %d", ret);
 			return ret;
 		}
 	}
@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
 		rte_memcpy(&afu_pr_conf->afu_id.uuid.uuid_high, uuid.b + 8,
 			sizeof(u64));

-		IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx\n",
+		IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx",
 			__func__,
 			(unsigned long)afu_pr_conf->afu_id.uuid.uuid_low,
 			(unsigned long)afu_pr_conf->afu_id.uuid.uuid_high);
@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
 	if (ret)
 		return -EINVAL;

-	IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64 "\n", val);
+	IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64, val);

 	ret = ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_SEU_EMR_HIGH, &val);
 	if (ret)
 		return -EINVAL;

-	IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64 "\n", val);
+	IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64, val);

 	return 0;
 }
@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
 	if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_NONFATAL_ERRORS, &val))
 		return -EINVAL;
 	if ((val & 0x40) != 0)
-		IFPGA_RAWDEV_PMD_INFO("clean not done\n");
+		IFPGA_RAWDEV_PMD_INFO("clean not done");

 	return 0;
 }
@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
 	if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
 		return -EINVAL;

-	IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64 "\n", val);
+	IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64, val);

 	ifpga_set_fme_error_prop(mgr, FME_ERR_PROP_CLEAR, val);

 	if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
 		return -EINVAL;

-	IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64 "\n", val);
+	IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64, val);

 	return 0;
 }
@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
 	fme_error0.csr = val;

 	if (fme_error0.fabric_err)
-		IFPGA_RAWDEV_PMD_ERR("Fabric error\n");
+		IFPGA_RAWDEV_PMD_ERR("Fabric error");
 	else if (fme_error0.fabfifo_overflow)
-		IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error\n");
+		IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error");
 	else if (fme_error0.afu_acc_mode_err)
-		IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected\n");
+		IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected");
 	else if (fme_error0.pcie0cdc_parity_err)
-		IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error\n");
+		IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error");
 	else if (fme_error0.cvlcdc_parity_err)
-		IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error\n");
+		IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error");
 	else if (fme_error0.fpgaseuerr)
 		fme_err_read_seu_emr(mgr);

@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
 	fme_catfatal.csr = val;

 	if (fme_catfatal.cci_fatal_err)
-		IFPGA_RAWDEV_PMD_ERR("CCI error detected\n");
+		IFPGA_RAWDEV_PMD_ERR("CCI error detected");
 	else if (fme_catfatal.fabric_fatal_err)
-		IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected\n");
+		IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected");
 	else if (fme_catfatal.pcie_poison_err)
-		IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports\n");
+		IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports");
 	else if (fme_catfatal.inject_fata_err)
-		IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error\n");
+		IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error");
 	else if (fme_catfatal.crc_catast_err)
-		IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error\n");
+		IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error");
 	else if (fme_catfatal.injected_catast_err)
-		IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error\n");
+		IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error");
 	else if (fme_catfatal.bmc_seu_catast_err)
 		fme_err_read_seu_emr(mgr);

@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
 	nonfaterr.csr = val;

 	if (nonfaterr.temp_thresh_ap1)
-		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1\n");
+		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1");
 	else if (nonfaterr.temp_thresh_ap2)
-		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2\n");
+		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2");
 	else if (nonfaterr.pcie_error)
-		IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie\n");
+		IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie");
 	else if (nonfaterr.portfatal_error)
-		IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.\n");
+		IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.");
 	else if (nonfaterr.proc_hot)
-		IFPGA_RAWDEV_PMD_INFO("a ProcHot event\n");
+		IFPGA_RAWDEV_PMD_INFO("a ProcHot event");
 	else if (nonfaterr.afu_acc_mode_err)
-		IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch\n");
+		IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch");
 	else if (nonfaterr.injected_nonfata_err) {
-		IFPGA_RAWDEV_PMD_INFO("Injected Warning Error\n");
+		IFPGA_RAWDEV_PMD_INFO("Injected Warning Error");
 		fme_clear_warning_intr(mgr);
 	} else if (nonfaterr.temp_thresh_AP6)
-		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6\n");
+		IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6");
 	else if (nonfaterr.power_thresh_AP1)
-		IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1\n");
+		IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1");
 	else if (nonfaterr.power_thresh_AP2)
-		IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2\n");
+		IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2");
 	else if (nonfaterr.mbp_err)
-		IFPGA_RAWDEV_PMD_INFO("an MBP event\n");
+		IFPGA_RAWDEV_PMD_INFO("an MBP event");

 	return 0;
 }
@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
 {
 	struct opae_manager *mgr = (struct opae_manager *)param;

-	IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred\n", __func__);
+	IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred", __func__);

 	fme_err_handle_error0(mgr);
 	fme_err_handle_nonfaterror(mgr);
@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
 		return -EINVAL;

 	if ((*intr_handle) == NULL) {
-		IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered\n",
+		IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered",
 			type == IFPGA_FME_IRQ ? "FME" : "AFU",
 			type == IFPGA_FME_IRQ ? 0 : vec_start);
 		return -ENOENT;
@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,

 	rc = rte_intr_callback_unregister(*intr_handle, handler, arg);
 	if (rc < 0) {
-		IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d\n",
+		IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d",
 			type == IFPGA_FME_IRQ ? "FME" : "AFU",
 			type == IFPGA_FME_IRQ ? 0 : vec_start);
 	} else {
@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
 			rte_intr_efds_index_get(*intr_handle, 0)))
 		return -rte_errno;

-	IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
+	IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d",
 			name, rte_intr_dev_fd_get(*intr_handle),
 			rte_intr_fd_get(*intr_handle));

@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
 		return -EINVAL;
 	}

-	IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+	IFPGA_RAWDEV_PMD_INFO("success register %s interrupt", name);

 	free(intr_efds);
 	return 0;
diff --git a/drivers/regex/cn9k/cn9k_regexdev.c b/drivers/regex/cn9k/cn9k_regexdev.c
index e96cbf4141..aa809ab5bf 100644
--- a/drivers/regex/cn9k/cn9k_regexdev.c
+++ b/drivers/regex/cn9k/cn9k_regexdev.c
@@ -192,7 +192,7 @@ ree_dev_register(const char *name)
 {
 	struct rte_regexdev *dev;

-	cn9k_ree_dbg("Creating regexdev %s\n", name);
+	cn9k_ree_dbg("Creating regexdev %s", name);

 	/* allocate device structure */
 	dev = rte_regexdev_register(name);
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index f034bd59ba..2958368813 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -536,7 +536,7 @@ notify_relay(void *arg)
 		if (nfds < 0) {
 			if (errno == EINTR)
 				continue;
-			DRV_LOG(ERR, "epoll_wait return fail\n");
+			DRV_LOG(ERR, "epoll_wait return fail");
 			return 1;
 		}

@@ -651,12 +651,12 @@ intr_relay(void *arg)
 				    errno == EWOULDBLOCK ||
 				    errno == EAGAIN)
 					continue;
-				DRV_LOG(ERR, "Error reading from file descriptor %d: %s\n",
+				DRV_LOG(ERR, "Error reading from file descriptor %d: %s",
 					csc_event.data.fd,
 					strerror(errno));
 				goto out;
 			} else if (nbytes == 0) {
-				DRV_LOG(ERR, "Read nothing from file descriptor %d\n",
+				DRV_LOG(ERR, "Read nothing from file descriptor %d",
 					csc_event.data.fd);
 				continue;
 			} else {
@@ -1500,7 +1500,7 @@ ifcvf_pci_get_device_type(struct rte_pci_device *pci_dev)
 	uint16_t device_id;

 	if (pci_device_id < 0x1000 || pci_device_id > 0x107f) {
-		DRV_LOG(ERR, "Probe device is not a virtio device\n");
+		DRV_LOG(ERR, "Probe device is not a virtio device");
 		return -1;
 	}

@@ -1577,7 +1577,7 @@ ifcvf_blk_get_config(int vid, uint8_t *config, uint32_t size)
 	DRV_LOG(DEBUG, "      sectors  : %u", dev_cfg->geometry.sectors);
 	DRV_LOG(DEBUG, "num_queues: 0x%08x", dev_cfg->num_queues);

-	DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]\n",
+	DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]",
 		config[0], config[1], config[2], config[3], config[4],
 		config[5], config[6], config[7]);
 	return 0;
diff --git a/drivers/vdpa/nfp/nfp_vdpa.c b/drivers/vdpa/nfp/nfp_vdpa.c
index cef80b5476..3e4247dbcb 100644
--- a/drivers/vdpa/nfp/nfp_vdpa.c
+++ b/drivers/vdpa/nfp/nfp_vdpa.c
@@ -127,7 +127,7 @@ nfp_vdpa_vfio_setup(struct nfp_vdpa_dev *device)
 	if (device->vfio_group_fd < 0)
 		goto container_destroy;

-	DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,\n",
+	DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,",
 			device->vfio_container_fd, device->vfio_group_fd);

 	ret = rte_pci_map_device(pci_dev);
--
2.34.1
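
For context, the failure mode this patch removes is easy to reproduce with
any line-oriented logging wrapper. The sketch below is illustrative only:
DRV_LOG_LINE is a hypothetical stand-in for the RTE_LOG_LINE-style wrappers
the drivers use (it is not the actual DPDK macro definition), but it shows
why a '\n' left in the format string prints a blank line after every
message once the wrapper appends its own newline.

    #include <stdio.h>

    /* Hypothetical wrapper: appends the trailing newline itself,
     * as the RTE_LOG_LINE-style macros do. */
    #define DRV_LOG_LINE(fmt, ...) printf(fmt "\n", ##__VA_ARGS__)

    int main(void)
    {
        DRV_LOG_LINE("epoll_wait return fail\n"); /* two newlines: blank line follows */
        DRV_LOG_LINE("epoll_wait return fail");   /* after the fix: exactly one newline */
        return 0;
    }

This is why the patch only strips the trailing "\n" from each format string
and leaves the message text itself untouched.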

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2024-11-11 14:23:05.530150931 +0800
+++ 0003-drivers-remove-redundant-newline-from-logs.patch	2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From f665790a5dbad7b645ff46f31d65e977324e7bfc Mon Sep 17 00:00:00 2001
+From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
@@ -4,0 +5 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
@@ -6 +7 @@
-Fix places where two newline characters may be logged.
+[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]
@@ -8 +9 @@
-Cc: stable@dpdk.org
+Fix places where two newline characters may be logged.
@@ -13 +14 @@
- drivers/baseband/acc/rte_acc100_pmd.c         |  20 +-
+ drivers/baseband/acc/rte_acc100_pmd.c         |  22 +-
@@ -15 +16 @@
- .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  16 +-
+ .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  14 +-
@@ -53 +54 @@
- drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  44 ++--
+ drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c   |  42 ++--
@@ -55 +56 @@
- drivers/crypto/dpaa_sec/dpaa_sec.c            |  27 ++-
+ drivers/crypto/dpaa_sec/dpaa_sec.c            |  24 +-
@@ -99,3 +99,0 @@
- drivers/net/cnxk/cn10k_ethdev.c               |   2 +-
- drivers/net/cnxk/cn9k_ethdev.c                |   2 +-
- drivers/net/cnxk/cnxk_eswitch_devargs.c       |   2 +-
@@ -105,3 +102,0 @@
- drivers/net/cnxk/cnxk_rep.c                   |   8 +-
- drivers/net/cnxk/cnxk_rep.h                   |   2 +-
- drivers/net/cpfl/cpfl_ethdev.c                |   2 +-
@@ -128,3 +123 @@
- drivers/net/gve/base/gve_adminq.c             |   4 +-
- drivers/net/gve/gve_rx.c                      |   2 +-
- drivers/net/gve/gve_tx.c                      |   2 +-
+ drivers/net/gve/base/gve_adminq.c             |   2 +-
@@ -139 +132 @@
- drivers/net/i40e/i40e_ethdev.c                |  51 ++--
+ drivers/net/i40e/i40e_ethdev.c                |  37 ++-
@@ -141 +134 @@
- drivers/net/i40e/i40e_rxtx.c                  |  42 ++--
+ drivers/net/i40e/i40e_rxtx.c                  |  24 +-
@@ -143 +136 @@
- drivers/net/iavf/iavf_rxtx.c                  |  16 +-
+ drivers/net/iavf/iavf_rxtx.c                  |   2 +-
@@ -146 +139 @@
- drivers/net/ice/ice_ethdev.c                  |  50 ++--
+ drivers/net/ice/ice_ethdev.c                  |  44 ++--
@@ -149 +142 @@
- drivers/net/ice/ice_rxtx.c                    |  18 +-
+ drivers/net/ice/ice_rxtx.c                    |   2 +-
@@ -168 +161 @@
- drivers/net/octeon_ep/otx_ep_ethdev.c         |  82 +++----
+ drivers/net/octeon_ep/otx_ep_ethdev.c         |  80 +++----
@@ -183 +175,0 @@
- drivers/net/virtio/virtio_user/vhost_vdpa.c   |   2 +-
@@ -193 +185 @@
- 180 files changed, 1244 insertions(+), 1262 deletions(-)
+ 171 files changed, 1194 insertions(+), 1211 deletions(-)
@@ -196 +188 @@
-index ab69350080..5c91acab7e 100644
+index 292537e24d..9d028f0f48 100644
@@ -199 +191 @@
-@@ -229,7 +229,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
+@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
@@ -208 +200,10 @@
-@@ -2672,7 +2672,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
+ 			harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);
+
+ 		if ((harq_layout[harq_index].offset > 0) && harq_prun) {
+-			rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
++			rte_bbdev_log_debug("HARQ IN offset unexpected for now");
+ 			fcw->hcin_size0 = harq_layout[harq_index].size0;
+ 			fcw->hcin_offset = harq_layout[harq_index].offset;
+ 			fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
+@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -217 +218 @@
-@@ -2710,7 +2710,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -226 +227 @@
-@@ -2726,7 +2726,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -235,3 +236 @@
-@@ -3450,7 +3450,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
- 		}
- 		avail -= 1;
+@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
@@ -239,2 +238,4 @@
--		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d\n",
-+		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d",
+ 		if (i > 0)
+ 			same_op = cmp_ldpc_dec_op(&ops[i-1]);
+-		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
++		rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
@@ -244 +245 @@
-@@ -3566,7 +3566,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
@@ -253,3 +254,3 @@
-@@ -3643,7 +3643,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
- 		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- 				rte_memory_order_relaxed);
+@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+ 		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ 				__ATOMIC_RELAXED);
@@ -262 +263 @@
-@@ -3739,7 +3739,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -271,3 +272,3 @@
-@@ -3818,7 +3818,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
- 		atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- 				rte_memory_order_relaxed);
+@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+ 		atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ 				__ATOMIC_RELAXED);
@@ -280 +281 @@
-@@ -4552,7 +4552,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -290 +291 @@
-index 585dc49bd6..fad984ccc1 100644
+index 686e086a5c..88e1d03ebf 100644
@@ -365 +366 @@
-@@ -3304,7 +3304,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -374 +375 @@
-@@ -3411,7 +3411,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
@@ -383 +384 @@
-@@ -3946,7 +3946,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
+@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
@@ -392 +393 @@
-@@ -4606,7 +4606,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -401 +402 @@
-@@ -4976,7 +4976,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -411 +412 @@
-index 9b253cde28..3e04e44ba2 100644
+index 6b0644ffc5..d60cd3a5c5 100644
@@ -414 +415 @@
-@@ -1997,10 +1997,10 @@ fpga_5gnr_mutex_acquisition(struct fpga_5gnr_queue *q)
+@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
@@ -417,5 +418,9 @@
- 			usleep(FPGA_5GNR_TIMEOUT_CHECK_INTERVAL);
--		rte_bbdev_log_debug("Acquiring Mutex for %x\n", q->ddr_mutex_uuid);
-+		rte_bbdev_log_debug("Acquiring Mutex for %x", q->ddr_mutex_uuid);
- 		fpga_5gnr_reg_write_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX, mutex_ctrl);
- 		mutex_read = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX);
+ 			usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
+-		rte_bbdev_log_debug("Acquiring Mutex for %x\n",
++		rte_bbdev_log_debug("Acquiring Mutex for %x",
+ 				q->ddr_mutex_uuid);
+ 		fpga_reg_write_32(q->d->mmio_base,
+ 				FPGA_5GNR_FEC_MUTEX,
+ 				mutex_ctrl);
+ 		mutex_read = fpga_reg_read_32(q->d->mmio_base,
+ 				FPGA_5GNR_FEC_MUTEX);
@@ -427,7 +432,6 @@
-@@ -2038,7 +2038,7 @@ fpga_5gnr_harq_write_loopback(struct fpga_5gnr_queue *q,
- 		reg_32 = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- 		if (reg_32 < harq_in_length) {
- 			left_length = reg_32;
--			rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+			rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- 		}
+@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
+ 			FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ 	if (reg_32 < harq_in_length) {
+ 		left_length = reg_32;
+-		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -436,7 +440,7 @@
-@@ -2108,17 +2108,17 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- 		reg = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- 		if (reg < harq_in_length) {
- 			harq_in_length = reg;
--			rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+			rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- 		}
+ 	input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
+@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ 			FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ 	if (reg < harq_in_length) {
+ 		harq_in_length = reg;
+-		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++		rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -448 +452,2 @@
- 				harq_output->buf_len - rte_pktmbuf_headroom(harq_output),
+ 				harq_output->buf_len -
+ 				rte_pktmbuf_headroom(harq_output),
@@ -450 +455,2 @@
- 		harq_in_length = harq_output->buf_len - rte_pktmbuf_headroom(harq_output);
+ 		harq_in_length = harq_output->buf_len -
+ 				rte_pktmbuf_headroom(harq_output);
@@ -457,6 +463,6 @@
-@@ -2142,7 +2142,7 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- 		while (reg != 1) {
- 			reg = fpga_5gnr_reg_read_8(q->d->mmio_base, FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
- 			if (reg == FPGA_5GNR_DDR_OVERFLOW) {
--				rte_bbdev_log(ERR, "Read address is overflow!\n");
-+				rte_bbdev_log(ERR, "Read address is overflow!");
+@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ 				FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
+ 			if (reg == FPGA_DDR_OVERFLOW) {
+ 				rte_bbdev_log(ERR,
+-						"Read address is overflow!\n");
++						"Read address is overflow!");
@@ -466,9 +471,0 @@
-@@ -3376,7 +3376,7 @@ int rte_fpga_5gnr_fec_configure(const char *dev_name, const struct rte_fpga_5gnr
- 		return -ENODEV;
- 	}
- 	struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(bbdev->device);
--	rte_bbdev_log(INFO, "Configure dev id %x\n", pci_dev->id.device_id);
-+	rte_bbdev_log(INFO, "Configure dev id %x", pci_dev->id.device_id);
- 	if (pci_dev->id.device_id == VC_5GNR_PF_DEVICE_ID)
- 		return vc_5gnr_configure(dev_name, conf);
- 	else if (pci_dev->id.device_id == AGX100_PF_DEVICE_ID)
@@ -498 +495 @@
-index 574743a9da..1f661dd801 100644
+index 8ddc7ff05f..a66dcd8962 100644
@@ -574 +571 @@
-index c155f4a2fd..097d6dca08 100644
+index 89f0f329c0..adb452fd3e 100644
@@ -577 +574 @@
-@@ -500,7 +500,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -586 +583 @@
-@@ -515,7 +515,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -595 +592 @@
-@@ -629,14 +629,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
+@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
@@ -613 +610 @@
-index e12fd62f34..6981679a2d 100644
+index 5966776a85..b90efeb651 100644
@@ -748 +745 @@
-index daf7684d8e..438ac72563 100644
+index 14aff233d5..35eb8b7628 100644
@@ -751 +748 @@
-@@ -1564,7 +1564,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
+@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
@@ -868 +865 @@
-index ac522f8235..d890fad681 100644
+index 9e5e614b3b..92401e04d0 100644
@@ -871 +868 @@
-@@ -908,7 +908,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
+@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
@@ -894 +891 @@
-index 9f3870a311..e24826bb5d 100644
+index e1cef7a670..c1b91ad92f 100644
@@ -897 +894 @@
-@@ -504,7 +504,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
+@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
@@ -920 +917 @@
-index 293b0c81a1..499f93e373 100644
+index 748d287bad..b02c9c7f38 100644
@@ -923 +920 @@
-@@ -186,7 +186,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
+@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
@@ -933 +930 @@
-index 095afbb9e6..83228fb2b6 100644
+index f8607b2852..d39af3c85e 100644
@@ -936 +933 @@
-@@ -342,7 +342,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
+@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
@@ -946 +943 @@
-index 87a3ac80b9..636f93604e 100644
+index b393be4cf6..2e6846312b 100644
@@ -968 +965 @@
-index e638c616d8..561836760c 100644
+index f6be84ceb5..105450774e 100644
@@ -971 +968,2 @@
-@@ -10,7 +10,7 @@
+@@ -9,7 +9,7 @@
+
@@ -973 +970,0 @@
- #define RTE_LOGTYPE_IDPF_COMMON idpf_common_logtype
@@ -980 +977 @@
-@@ -18,9 +18,6 @@ extern int idpf_common_logtype;
+@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
@@ -1035 +1032 @@
-index ad44b0e01f..4bf9bac23e 100644
+index f95dd33375..21a110d22e 100644
@@ -1366 +1363 @@
-index bb19854b50..7353fd4957 100644
+index 7391360925..d52f937548 100644
@@ -1659 +1656 @@
-index 6ed7a8f41c..27cdbf5ed4 100644
+index b55258689b..1713600db7 100644
@@ -1799 +1796 @@
-index 0dcf971a15..8956f7750d 100644
+index 583ba3b523..acb40bdf77 100644
@@ -1830 +1827 @@
-index 0c800fc350..5088d8ded6 100644
+index b7ca3af5a4..6d42b92d8b 100644
@@ -1843 +1840 @@
-index ca99bc6f42..700e141667 100644
+index a5271d7227..c92fdb446d 100644
@@ -1846 +1843 @@
-@@ -227,7 +227,7 @@ cryptodev_ccp_create(const char *name,
+@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
@@ -1856 +1853 @@
-index dbd36a8a54..32415e815e 100644
+index c2a807fa94..cf163e0208 100644
@@ -1859 +1856 @@
-@@ -1953,7 +1953,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1868 +1865 @@
-@@ -2037,7 +2037,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
+@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
@@ -1877 +1874 @@
-@@ -2114,7 +2114,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1887 +1884 @@
-index c1f7181d55..99b6359e52 100644
+index 6ae356ace0..b65bea3b3f 100644
@@ -1959 +1956 @@
-@@ -1447,7 +1447,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1968 +1965 @@
-@@ -1476,7 +1476,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1977 +1974 @@
-@@ -1494,7 +1494,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1986 +1983 @@
-@@ -1570,7 +1570,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -1995 +1992 @@
-@@ -1603,7 +1603,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -2004 +2001 @@
-@@ -1825,7 +1825,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2013 +2010 @@
-@@ -1842,7 +1842,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2022 +2019 @@
-@@ -1885,7 +1885,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2031 +2028 @@
-@@ -1938,7 +1938,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2040 +2037 @@
-@@ -1949,7 +1949,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2048,2 +2045,2 @@
- 					dpaa2_sec_dump(ops[num_rx], stdout);
-@@ -1967,7 +1967,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+ 					dpaa2_sec_dump(ops[num_rx]);
+@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2058,10 +2055 @@
-@@ -2017,7 +2017,7 @@ dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-
- 	if (qp_conf->nb_descriptors < (2 * FLE_POOL_CACHE_SIZE)) {
- 		DPAA2_SEC_ERR("Minimum supported nb_descriptors %d,"
--			      " but given %d\n", (2 * FLE_POOL_CACHE_SIZE),
-+			      " but given %d", (2 * FLE_POOL_CACHE_SIZE),
- 			      qp_conf->nb_descriptors);
- 		return -EINVAL;
- 	}
-@@ -2544,7 +2544,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
+@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
@@ -2076 +2064 @@
-@@ -4254,7 +4254,7 @@ check_devargs_handler(const char *key, const char *value,
+@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
@@ -2162 +2150 @@
-index 1ddad6944e..225bf950e9 100644
+index 906ea39047..131cd90c94 100644
@@ -2174 +2162 @@
-@@ -851,7 +851,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
+@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
@@ -2182,2 +2170,2 @@
- 					dpaa_sec_dump(ctx, qp, stdout);
-@@ -1946,7 +1946,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+ 					dpaa_sec_dump(ctx, qp);
+@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2192 +2180 @@
-@@ -2056,7 +2056,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2201 +2189 @@
-@@ -2097,7 +2097,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2210 +2198 @@
-@@ -2160,7 +2160,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
@@ -2219 +2207 @@
-@@ -2466,7 +2466,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
+@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
@@ -2228 +2216,2 @@
-@@ -2517,9 +2517,8 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+ 	for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
@@ -2230,7 +2219,3 @@
- 			ret = qman_retire_fq(fq, NULL);
- 			if (ret != 0)
--				DPAA_SEC_ERR("Queue %d is not retired"
--					     " err: %d\n", fq->fqid,
--					     ret);
-+				DPAA_SEC_ERR("Queue %d is not retired err: %d",
-+					     fq->fqid, ret);
+ 			if (qman_retire_fq(fq, NULL) != 0)
+-				DPAA_SEC_DEBUG("Queue is not retired\n");
++				DPAA_SEC_DEBUG("Queue is not retired");
@@ -2240 +2225 @@
-@@ -3475,7 +3474,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
@@ -2249 +2234 @@
-@@ -3574,7 +3573,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
+@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
@@ -2258 +2243 @@
-@@ -3637,7 +3636,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2267 +2252 @@
-@@ -3705,7 +3704,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2277 +2262 @@
-index f8c85b6528..60dbaee4ec 100644
+index fb895a8bc6..82ac1fa1c4 100644
@@ -2280 +2265 @@
-@@ -30,7 +30,7 @@ extern int dpaa_logtype_sec;
+@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;
@@ -2284,2 +2269,2 @@
--	RTE_LOG_DP(level, DPAA_SEC, fmt, ## args)
-+	RTE_LOG_DP_LINE(level, DPAA_SEC, fmt, ## args)
+-	RTE_LOG_DP(level, PMD, fmt, ## args)
++	RTE_LOG_DP_LINE(level, PMD, fmt, ## args)
@@ -2343 +2328 @@
-index be6dbe9b1b..d42acd913c 100644
+index 52722f94a0..252bcb3192 100644
@@ -2356 +2341 @@
-index ef4228bd38..f3633091a9 100644
+index 80de25c65b..8e74645e0a 100644
@@ -2359 +2344 @@
-@@ -113,7 +113,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2368 +2353 @@
-@@ -136,7 +136,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2377 +2362 @@
-@@ -171,7 +171,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2386 +2371 @@
-@@ -198,7 +198,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2395 +2380 @@
-@@ -211,7 +211,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2404 +2389 @@
-@@ -223,11 +223,11 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2418 +2403 @@
-@@ -243,7 +243,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2427 +2412 @@
-@@ -258,7 +258,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2436 +2421 @@
-@@ -389,7 +389,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2445 +2430 @@
-@@ -725,7 +725,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2454 +2439 @@
-@@ -761,7 +761,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2463 +2448 @@
-@@ -782,7 +782,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2472 +2457 @@
-@@ -1234,7 +1234,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
@@ -2482 +2467 @@
-index a96779f059..65f0e5c568 100644
+index e64df1a462..a0b354bb83 100644
@@ -2504 +2489 @@
-index 6e2afde34f..3104e6d31e 100644
+index 4647d568de..aa2363ef15 100644
@@ -2916 +2901 @@
-index 491f5ecd5b..e43884e69b 100644
+index 2bf3060278..5d240a3de1 100644
@@ -2919 +2904 @@
-@@ -1531,7 +1531,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev)
+@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
@@ -2926,2 +2911,2 @@
- 	if (qat_pci_dev->qat_dev_gen == QAT_VQAT &&
- 		sub_id != ADF_VQAT_ASYM_PCI_SUBSYSTEM_ID) {
+ 	if (gen_dev_ops->cryptodev_ops == NULL) {
+ 		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
@@ -2929 +2914 @@
-index eb267db424..50d687fd37 100644
+index 9f4f6c3d93..224cc0ab50 100644
@@ -2932 +2917 @@
-@@ -581,7 +581,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
+@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
@@ -2941 +2926 @@
-@@ -1180,7 +1180,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
+@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
@@ -2950 +2935 @@
-@@ -1805,7 +1805,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
+@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
@@ -2959 +2944 @@
-@@ -2657,10 +2657,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
+@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
@@ -3187 +3172 @@
-index 7cd6ebc1e0..bce4b4b277 100644
+index 4db3b0554c..8bc076f5d5 100644
@@ -3190 +3175 @@
-@@ -357,7 +357,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
+@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
@@ -3199 +3184 @@
-@@ -630,7 +630,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
+@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
@@ -3208 +3193 @@
-@@ -912,7 +912,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -3231 +3216 @@
-index 81637d9420..60ac219559 100644
+index a78889a7ef..2ee78773bb 100644
@@ -3234 +3219 @@
-@@ -324,7 +324,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3243 +3228 @@
-@@ -339,7 +339,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3252 +3237 @@
-@@ -365,7 +365,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3336 +3321 @@
-index f0a4998bdd..c43ab864ca 100644
+index 5044cb17ef..9dc5edb3fb 100644
@@ -3339 +3324 @@
-@@ -171,7 +171,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
+@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
@@ -3348 +3333 @@
-@@ -259,7 +259,7 @@ set_producer_coremask(const char *key __rte_unused,
+@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
@@ -3357 +3342 @@
-@@ -293,7 +293,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3366 +3351 @@
-@@ -304,7 +304,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3375 +3360 @@
-@@ -322,7 +322,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3384 +3369 @@
-@@ -333,7 +333,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3393 +3378 @@
-@@ -351,7 +351,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3402 +3387 @@
-@@ -361,7 +361,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3411 +3396 @@
-@@ -378,7 +378,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3420 +3405 @@
-@@ -388,7 +388,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3429 +3414 @@
-@@ -405,7 +405,7 @@ set_dev_id(const char *key __rte_unused,
+@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
@@ -3438 +3423 @@
-@@ -425,7 +425,7 @@ set_poll_interval(const char *key __rte_unused,
+@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
@@ -3447 +3432 @@
-@@ -445,7 +445,7 @@ set_port_cos(const char *key __rte_unused,
+@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
@@ -3456 +3441 @@
-@@ -458,18 +458,18 @@ set_port_cos(const char *key __rte_unused,
+@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
@@ -3478 +3463 @@
-@@ -487,7 +487,7 @@ set_cos_bw(const char *key __rte_unused,
+@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
@@ -3487 +3472 @@
-@@ -495,11 +495,11 @@ set_cos_bw(const char *key __rte_unused,
+@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,
@@ -3501 +3486 @@
-@@ -515,7 +515,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3510 +3495 @@
-@@ -524,7 +524,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3519 +3504 @@
-@@ -540,7 +540,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
+@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
@@ -3528 +3513 @@
-@@ -560,7 +560,7 @@ set_default_depth_thresh(const char *key __rte_unused,
+@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
@@ -3537 +3522 @@
-@@ -579,7 +579,7 @@ set_vector_opts_enab(const char *key __rte_unused,
+@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
@@ -3546 +3531 @@
-@@ -599,7 +599,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
+@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
@@ -3555 +3540 @@
-@@ -619,7 +619,7 @@ set_enable_cq_weight(const char *key __rte_unused,
+@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
@@ -3564 +3549 @@
-@@ -640,7 +640,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3573 +3558 @@
-@@ -657,18 +657,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3595 +3580 @@
-@@ -688,7 +688,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3604 +3589 @@
-@@ -705,18 +705,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3626 +3611 @@
-@@ -738,7 +738,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
+@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
@@ -3635 +3620 @@
-@@ -781,7 +781,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3644 +3629 @@
-@@ -809,7 +809,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3653 +3638 @@
-@@ -854,7 +854,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3662 +3647 @@
-@@ -930,27 +930,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3694 +3679 @@
-@@ -1000,7 +1000,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3703 +3688 @@
-@@ -1068,7 +1068,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
+@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
@@ -3712 +3697 @@
-@@ -1088,7 +1088,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
+@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
@@ -3721 +3706 @@
-@@ -1107,7 +1107,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
+@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
@@ -3730 +3715 @@
-@@ -1161,7 +1161,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
+@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
@@ -3739 +3724 @@
-@@ -1236,7 +1236,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
+@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
@@ -3748 +3733 @@
-@@ -1272,7 +1272,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
+@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
@@ -3757 +3742 @@
-@@ -1380,7 +1380,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3766 +3751 @@
-@@ -1412,7 +1412,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3775 +3760 @@
-@@ -1440,20 +1440,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
+@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
@@ -3799 +3784 @@
-@@ -1536,14 +1536,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3816 +3801 @@
-@@ -1579,7 +1579,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3825 +3810 @@
-@@ -1602,7 +1602,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3834 +3819 @@
-@@ -1615,7 +1615,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3843 +3828 @@
-@@ -1717,7 +1717,7 @@ error_exit:
+@@ -1714,7 +1714,7 @@ error_exit:
@@ -3852 +3837 @@
-@@ -1761,13 +1761,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3868 +3853 @@
-@@ -1802,7 +1802,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3877 +3862 @@
-@@ -1827,7 +1827,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3886 +3871 @@
-@@ -1916,7 +1916,7 @@ error_exit:
+@@ -1913,7 +1913,7 @@ error_exit:
@@ -3895 +3880 @@
-@@ -1932,7 +1932,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3904 +3889 @@
-@@ -1950,7 +1950,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3913 +3898 @@
-@@ -1982,7 +1982,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3922 +3907 @@
-@@ -2004,7 +2004,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3931 +3916 @@
-@@ -2015,7 +2015,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3940 +3925 @@
-@@ -2082,9 +2082,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
+@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
@@ -3952 +3937 @@
-@@ -2117,7 +2117,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
+@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
@@ -3961 +3946 @@
-@@ -2154,7 +2154,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
+@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
@@ -3970 +3955 @@
-@@ -2172,7 +2172,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
+@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
@@ -3979 +3964 @@
-@@ -2202,7 +2202,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
+@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
@@ -3988 +3973 @@
-@@ -2240,7 +2240,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -3997 +3982 @@
-@@ -2250,7 +2250,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4006 +3991 @@
-@@ -2258,7 +2258,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4015 +4000 @@
-@@ -2267,7 +2267,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4024 +4009 @@
-@@ -2289,14 +2289,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
+@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
@@ -4041 +4026 @@
-@@ -2381,7 +2381,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
+@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
@@ -4050 +4035 @@
-@@ -2434,7 +2434,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4059 +4044 @@
-@@ -2459,7 +2459,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4068 +4053 @@
-@@ -2477,7 +2477,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4077 +4062 @@
-@@ -2504,7 +2504,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4086 +4071 @@
-@@ -2516,7 +2516,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4095 +4080 @@
-@@ -2609,7 +2609,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4104 +4089 @@
-@@ -2645,7 +2645,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4113 +4098 @@
-@@ -2890,7 +2890,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4115 +4100 @@
- 			DLB2_LOG_LINE_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED");
+ 			DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -4122 +4107 @@
-@@ -2909,7 +2909,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4131 +4116 @@
-@@ -3156,7 +3156,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4140 +4125 @@
-@@ -3213,7 +3213,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4149 +4134 @@
-@@ -3367,7 +3367,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
+@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
@@ -4158 +4143 @@
-@@ -4283,7 +4283,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4167 +4152 @@
-@@ -4303,7 +4303,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4176 +4161 @@
-@@ -4394,7 +4394,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4185 +4170 @@
-@@ -4402,7 +4402,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4194 +4179 @@
-@@ -4420,7 +4420,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4203 +4188 @@
-@@ -4435,7 +4435,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4212 +4197 @@
-@@ -4454,7 +4454,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
+@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
@@ -4221 +4206 @@
-@@ -4610,7 +4610,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4230 +4215 @@
-@@ -4618,14 +4618,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4247 +4232 @@
-@@ -4648,7 +4648,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4256 +4241 @@
-@@ -4656,7 +4656,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4265 +4250 @@
-@@ -4664,7 +4664,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4274 +4259 @@
-@@ -4694,14 +4694,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
@@ -4292 +4277 @@
-index 22094f30bb..c037cfe786 100644
+index ff15271dda..28de48e24e 100644
@@ -4561 +4546 @@
-index b3576e5f42..ed4e6e424c 100644
+index 3d15250e11..019e90f7e7 100644
@@ -4653 +4638 @@
-index 1273455673..f0b2c7de99 100644
+index dd4e64395f..4658eaf3a2 100644
@@ -4674 +4659 @@
-@@ -851,7 +851,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
+@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
@@ -4683 +4668 @@
-@@ -885,7 +885,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
+@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
@@ -4692 +4677 @@
-@@ -905,7 +905,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
+@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
@@ -4701 +4686 @@
-@@ -928,7 +928,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
+@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
@@ -4710 +4695 @@
-@@ -1161,7 +1161,7 @@ dpaa2_eventdev_destroy(const char *name)
+@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)
@@ -4733 +4718 @@
-index b34a5fcacd..25853166bf 100644
+index 0cccaf7e97..fe0c0ede6f 100644
@@ -4844 +4829 @@
-@@ -662,7 +662,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4853 +4838 @@
-@@ -709,7 +709,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4862 +4847 @@
-@@ -753,7 +753,7 @@ opdl_remove(struct rte_vdev_device *vdev)
+@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
@@ -5367 +5352 @@
-index 19a52afc7d..7913bc547e 100644
+index 2096496917..babe77a20f 100644
@@ -5415 +5400 @@
-@@ -772,7 +772,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5424 +5409 @@
-@@ -780,7 +780,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5433 +5418 @@
-@@ -788,7 +788,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5442 +5427 @@
-@@ -1000,7 +1000,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5451 +5436 @@
-@@ -1070,7 +1070,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5460 +5445 @@
-@@ -1134,7 +1134,7 @@ sw_remove(struct rte_vdev_device *vdev)
+@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
@@ -5492 +5477 @@
-index 42e17d984c..886fb7fbb0 100644
+index 84371d5d1a..b0c6d153e4 100644
@@ -5495 +5480 @@
-@@ -69,7 +69,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
+@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
@@ -5504 +5489 @@
-@@ -213,7 +213,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
@@ -5513 +5498 @@
-@@ -357,7 +357,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
@@ -5522 +5507 @@
-@@ -472,7 +472,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
+@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
@@ -5954 +5939 @@
-index 17b7b5c543..5448a5f3d7 100644
+index 6ce87f83f4..da45ebf45f 100644
@@ -5957 +5942 @@
-@@ -1353,7 +1353,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
+@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
@@ -6000 +5985 @@
-index cdedf67c6f..209cf5a80c 100644
+index 06c21ebe6d..3cca8a07f3 100644
@@ -6087,39 +6071,0 @@
-diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
-index 55ed54bb0f..ad6bc1ec21 100644
---- a/drivers/net/cnxk/cn10k_ethdev.c
-+++ b/drivers/net/cnxk/cn10k_ethdev.c
-@@ -707,7 +707,7 @@ cn10k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- 	available_pkts = cn10k_nix_rx_avail_get(rxq);
-
- 	if ((offset + num - 1) >= available_pkts) {
--		plt_err("Invalid BD num=%u\n", num);
-+		plt_err("Invalid BD num=%u", num);
- 		return -EINVAL;
- 	}
-
-diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
-index ea92b1dcb6..84c88655f8 100644
---- a/drivers/net/cnxk/cn9k_ethdev.c
-+++ b/drivers/net/cnxk/cn9k_ethdev.c
-@@ -708,7 +708,7 @@ cn9k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- 	available_pkts = cn9k_nix_rx_avail_get(rxq);
-
- 	if ((offset + num - 1) >= available_pkts) {
--		plt_err("Invalid BD num=%u\n", num);
-+		plt_err("Invalid BD num=%u", num);
- 		return -EINVAL;
- 	}
-
-diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-index 8167ce673a..655813c71a 100644
---- a/drivers/net/cnxk/cnxk_eswitch_devargs.c
-+++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-@@ -26,7 +26,7 @@ populate_repr_hw_info(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_devar
-
- 	if (eth_da->type != RTE_ETH_REPRESENTOR_VF && eth_da->type != RTE_ETH_REPRESENTOR_PF &&
- 	    eth_da->type != RTE_ETH_REPRESENTOR_SF) {
--		plt_err("unsupported representor type %d\n", eth_da->type);
-+		plt_err("unsupported representor type %d", eth_da->type);
- 		return -ENOTSUP;
- 	}
-
@@ -6127 +6073 @@
-index 38746c81c5..33bac55704 100644
+index c841b31051..60baf806ab 100644
@@ -6130 +6076 @@
-@@ -589,7 +589,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6139 +6085 @@
-@@ -617,7 +617,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6178 +6124 @@
-index b1093dd584..5b0948e07a 100644
+index c8f4848f92..89e00f8fc7 100644
@@ -6181 +6127 @@
-@@ -532,7 +532,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
+@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
@@ -6190,66 +6135,0 @@
-diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
-index ca0637bde5..652d419ad8 100644
---- a/drivers/net/cnxk/cnxk_rep.c
-+++ b/drivers/net/cnxk/cnxk_rep.c
-@@ -270,7 +270,7 @@ cnxk_representee_mtu_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t
-
- 		rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
- 		if (rep_dev->rep_id == rep_id) {
--			plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d\n", mtu, hw_func,
-+			plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d", mtu, hw_func,
- 				    rep_id);
- 			rep_dev->repte_mtu = mtu;
- 			break;
-@@ -423,7 +423,7 @@ cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
- 			plt_err("Failed to alloc switch domain: %d", rc);
- 			goto fail;
- 		}
--		plt_rep_dbg("Allocated switch domain id %d for pf %d\n", switch_domain_id, pf);
-+		plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf);
- 		eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id;
- 		eswitch_dev->sw_dom[j].pf = pf;
- 		prev_pf = pf;
-@@ -549,7 +549,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- 	int i, j, rc;
-
- 	if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) {
--		plt_err("nb_representor_ports %d > %d MAX ETHPORTS\n",
-+		plt_err("nb_representor_ports %d > %d MAX ETHPORTS",
- 			eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS);
- 		rc = -EINVAL;
- 		goto fail;
-@@ -604,7 +604,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- 						   name, cnxk_representee_msg_thread_main,
- 						   eswitch_dev);
- 		if (rc != 0) {
--			plt_err("Failed to create thread for VF mbox handling\n");
-+			plt_err("Failed to create thread for VF mbox handling");
- 			goto thread_fail;
- 		}
- 	}
-diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
-index ad89649702..aaae2d4e8f 100644
---- a/drivers/net/cnxk/cnxk_rep.h
-+++ b/drivers/net/cnxk/cnxk_rep.h
-@@ -93,7 +93,7 @@ cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev)
- static __rte_always_inline void
- cnxk_rep_pool_buffer_stats(struct rte_mempool *pool)
- {
--	plt_rep_dbg("        pool %s size %d buffer count in use  %d available %d\n", pool->name,
-+	plt_rep_dbg("        pool %s size %d buffer count in use  %d available %d", pool->name,
- 		    pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool));
- }
-
-diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
-index 222e178949..6f6707a0bd 100644
---- a/drivers/net/cpfl/cpfl_ethdev.c
-+++ b/drivers/net/cpfl/cpfl_ethdev.c
-@@ -2284,7 +2284,7 @@ get_running_host_id(void)
- 	uint8_t host_id = CPFL_INVALID_HOST_ID;
-
- 	if (uname(&unamedata) != 0)
--		PMD_INIT_LOG(ERR, "Cannot fetch node_name for host\n");
-+		PMD_INIT_LOG(ERR, "Cannot fetch node_name for host");
- 	else if (strstr(unamedata.nodename, "ipu-imc"))
- 		PMD_INIT_LOG(ERR, "CPFL PMD cannot be running on IMC.");
- 	else if (strstr(unamedata.nodename, "ipu-acc"))
@@ -6310 +6190 @@
-index 449bbda7ca..88374ea905 100644
+index 8e610b6bba..c5b1f161fd 100644
@@ -6331 +6211 @@
-@@ -1934,7 +1934,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
+@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
@@ -6340 +6220 @@
-@@ -2308,7 +2308,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
+@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
@@ -6349 +6229 @@
-@@ -2424,7 +2424,7 @@ rte_pmd_dpaa2_thread_init(void)
+@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
@@ -6358 +6238 @@
-@@ -2839,7 +2839,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6367 +6247 @@
-@@ -2847,7 +2847,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6376 +6256 @@
-@@ -2930,7 +2930,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
+@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
@@ -6386 +6266 @@
-index 6c7bac4d48..62e350d736 100644
+index eec7e60650..e590f6f748 100644
@@ -6398 +6278 @@
-@@ -3602,7 +3602,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6407 +6287 @@
-@@ -3720,14 +3720,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6424 +6304 @@
-@@ -3749,7 +3749,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6433 +6313 @@
-@@ -3774,7 +3774,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6442 +6322 @@
-@@ -3843,20 +3843,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
+@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
@@ -6467 +6347 @@
-@@ -3935,7 +3935,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6476 +6356 @@
-@@ -3947,7 +3947,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6485 +6365 @@
-@@ -3957,7 +3957,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6494 +6374 @@
-@@ -3967,7 +3967,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6503 +6383 @@
-@@ -4014,13 +4014,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6519 +6399 @@
-@@ -4031,13 +4031,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6656 +6536 @@
-index 36a14526a5..59f7a172c6 100644
+index 63463c4fbf..eb649fb063 100644
@@ -6696 +6576 @@
-index cb854964b4..97d65e7181 100644
+index 8fe5bfa013..3c0f282ec3 100644
@@ -6774 +6654 @@
-index 095be27b08..1e0a483d4a 100644
+index 8858f975f8..d64a1aedd3 100644
@@ -6777 +6657 @@
-@@ -5116,7 +5116,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
+@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
@@ -6787 +6667 @@
-index d02ee206f1..ffbecc407c 100644
+index c9352f0746..d8c30ef150 100644
@@ -6790 +6670 @@
-@@ -151,7 +151,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
@@ -6799 +6679 @@
-@@ -198,7 +198,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
+@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
@@ -6880 +6760 @@
-index cad8db2f6f..c1dba0c0fd 100644
+index b04b6c9aa1..1121874346 100644
@@ -6883 +6763 @@
-@@ -672,7 +672,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
+@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
@@ -6892 +6772 @@
-@@ -695,7 +695,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6901 +6781 @@
-@@ -703,7 +703,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6910 +6790 @@
-@@ -716,7 +716,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6919 +6799 @@
-@@ -982,7 +982,7 @@ static int udp_tunnel_common_check(struct enic *enic,
+@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
@@ -6928 +6808 @@
-@@ -995,10 +995,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
+@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
@@ -6941 +6821 @@
-@@ -1029,7 +1029,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
+@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
@@ -6950 +6830 @@
-@@ -1061,7 +1061,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
+@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
@@ -6959 +6839 @@
-@@ -1325,7 +1325,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7188 +7068 @@
-index 09c6bff026..bcb983e4a0 100644
+index 343bd13d67..438c0c5441 100644
@@ -7200,35 +7079,0 @@
-@@ -736,7 +736,7 @@ gve_set_max_desc_cnt(struct gve_priv *priv,
- {
- 	if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
- 		PMD_DRV_LOG(DEBUG, "Overriding max ring size from device for DQ "
--			    "queue format to 4096.\n");
-+			    "queue format to 4096.");
- 		priv->max_rx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- 		priv->max_tx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- 		return;
-diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
-index 89b6ef384a..1f5fa3f1da 100644
---- a/drivers/net/gve/gve_rx.c
-+++ b/drivers/net/gve/gve_rx.c
-@@ -306,7 +306,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
-
- 	/* Ring size is required to be a power of two. */
- 	if (!rte_is_power_of_2(nb_desc)) {
--		PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+		PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- 			    nb_desc);
- 		return -EINVAL;
- 	}
-diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
-index 658bfb972b..015ea9646b 100644
---- a/drivers/net/gve/gve_tx.c
-+++ b/drivers/net/gve/gve_tx.c
-@@ -561,7 +561,7 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
-
- 	/* Ring size is required to be a power of two. */
- 	if (!rte_is_power_of_2(nb_desc)) {
--		PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+		PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- 			    nb_desc);
- 		return -EINVAL;
- 	}
@@ -7398 +7243 @@
-index 26fa2eb951..f7162ee7bc 100644
+index 916bf30dcb..0b768ef140 100644
@@ -7491 +7336 @@
-index f847bf82bc..42f51c7621 100644
+index ffc1f6d874..2b043cd693 100644
@@ -7494 +7339 @@
-@@ -668,7 +668,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7503 +7348 @@
-@@ -1583,10 +1583,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
+@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
@@ -7515 +7360 @@
-@@ -2326,7 +2323,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7524 +7369 @@
-@@ -2336,7 +2333,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7533 +7378 @@
-@@ -2361,7 +2358,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7542 +7387 @@
-@@ -6959,7 +6956,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7551 +7396 @@
-@@ -6975,7 +6972,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7560 +7405 @@
-@@ -6985,13 +6982,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7576 +7421 @@
-@@ -7004,7 +7001,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7585 +7430 @@
-@@ -7014,7 +7011,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7594 +7439 @@
-@@ -11449,7 +11446,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7603 +7448 @@
-@@ -11460,7 +11457,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7612 +7457 @@
-@@ -11490,7 +11487,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7621 +7466 @@
-@@ -11526,7 +11523,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7630 +7475 @@
-@@ -11828,7 +11825,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7639 +7484 @@
-@@ -11972,7 +11969,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7648,63 +7492,0 @@
-@@ -12317,7 +12314,7 @@ i40e_fec_get_capability(struct rte_eth_dev *dev,
- 	if (hw->mac.type == I40E_MAC_X722 &&
- 	    !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- 		PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
--			 " firmware. Please update the NVM image.\n");
-+			 " firmware. Please update the NVM image.");
- 		return -ENOTSUP;
- 	}
-
-@@ -12359,7 +12356,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- 	/* Get link info */
- 	ret = i40e_aq_get_link_info(hw, enable_lse, &link_status, NULL);
- 	if (ret != I40E_SUCCESS) {
--		PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- 				ret);
- 		return -ENOTSUP;
- 	}
-@@ -12369,7 +12366,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- 	ret = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- 						  NULL);
- 	if (ret) {
--		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- 				ret);
- 		return -ENOTSUP;
- 	}
-@@ -12435,7 +12432,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- 	if (hw->mac.type == I40E_MAC_X722 &&
- 	    !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- 		PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
--			 " firmware. Please update the NVM image.\n");
-+			 " firmware. Please update the NVM image.");
- 		return -ENOTSUP;
- 	}
-
-@@ -12507,7 +12504,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- 	status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- 					      NULL);
- 	if (status) {
--		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- 				status);
- 		return -ENOTSUP;
- 	}
-@@ -12524,7 +12521,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- 		config.fec_config = req_fec & I40E_AQ_PHY_FEC_CONFIG_MASK;
- 		status = i40e_aq_set_phy_config(hw, &config, NULL);
- 		if (status) {
--			PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+			PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- 			status);
- 			return -ENOTSUP;
- 		}
-@@ -12532,7 +12529,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
-
- 	status = i40e_update_link_info(hw);
- 	if (status) {
--		PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- 			status);
- 		return -ENOTSUP;
- 	}
@@ -7746 +7528 @@
-index ff977a3681..839c8a5442 100644
+index 5e693cb1ea..e65e8829d9 100644
@@ -7785,73 +7567 @@
-@@ -1564,7 +1564,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- 		if ((adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_MBUF) &&
- 		    (rte_mbuf_check(mb, 1, &reason) != 0)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+			PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- 			pkt_error = true;
- 			break;
- 		}
-@@ -1573,7 +1573,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- 		    (mb->data_len > mb->pkt_len ||
- 		     mb->data_len < I40E_TX_MIN_PKT_LEN ||
- 		     mb->data_len > adapter->max_pkt_len)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- 				mb->data_len, I40E_TX_MIN_PKT_LEN, adapter->max_pkt_len);
- 			pkt_error = true;
- 			break;
-@@ -1586,13 +1586,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- 				 * the limites.
- 				 */
- 				if (mb->nb_segs > I40E_TX_MAX_MTU_SEG) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- 						mb->nb_segs, I40E_TX_MAX_MTU_SEG);
- 					pkt_error = true;
- 					break;
- 				}
- 				if (mb->pkt_len > I40E_FRAME_SIZE_MAX) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- 						mb->nb_segs, I40E_FRAME_SIZE_MAX);
- 					pkt_error = true;
- 					break;
-@@ -1606,18 +1606,18 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- 					/**
- 					 * MSS outside the range are considered malicious
- 					 */
--					PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- 						mb->tso_segsz, I40E_MIN_TSO_MSS, I40E_MAX_TSO_MSS);
- 					pkt_error = true;
- 					break;
- 				}
- 				if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- 					pkt_error = true;
- 					break;
- 				}
- 				if (mb->pkt_len > I40E_TSO_FRAME_SIZE_MAX) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- 						mb->nb_segs, I40E_TSO_FRAME_SIZE_MAX);
- 					pkt_error = true;
- 					break;
-@@ -1627,13 +1627,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- 		if (adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_OFFLOAD) {
- 			if (ol_flags & I40E_TX_OFFLOAD_NOTSUP_MASK) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- 				pkt_error = true;
- 				break;
- 			}
-
- 			if (!rte_validate_tx_offload(mb)) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- 				pkt_error = true;
- 				break;
- 			}
-@@ -3573,7 +3573,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
@@ -7867 +7577 @@
-index 44276dcf38..c56fcfadf0 100644
+index 54bff05675..9087909ec2 100644
@@ -7870 +7580 @@
-@@ -2383,7 +2383,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7879 +7589 @@
-@@ -2418,7 +2418,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7888 +7598 @@
-@@ -3059,12 +3059,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
+@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
@@ -7903 +7613 @@
-@@ -3109,7 +3109,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7912 +7622 @@
-@@ -3136,7 +3136,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7922 +7632 @@
-index ecc31430d1..4850b9e381 100644
+index f19aa14646..ec0dffa30e 100644
@@ -7925 +7635 @@
-@@ -3036,7 +3036,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
@@ -7934,58 +7643,0 @@
-@@ -3830,7 +3830,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- 		if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_MBUF) &&
- 		    (rte_mbuf_check(mb, 1, &reason) != 0)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+			PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- 			pkt_error = true;
- 			break;
- 		}
-@@ -3838,7 +3838,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- 		if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_SIZE) &&
- 		    (mb->data_len < IAVF_TX_MIN_PKT_LEN ||
- 		     mb->data_len > adapter->vf.max_pkt_len)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- 					mb->data_len, IAVF_TX_MIN_PKT_LEN, adapter->vf.max_pkt_len);
- 			pkt_error = true;
- 			break;
-@@ -3848,7 +3848,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- 			/* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
- 			if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
- 				if (mb->nb_segs > IAVF_TX_MAX_MTU_SEG) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- 							mb->nb_segs, IAVF_TX_MAX_MTU_SEG);
- 					pkt_error = true;
- 					break;
-@@ -3856,12 +3856,12 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- 			} else if ((mb->tso_segsz < IAVF_MIN_TSO_MSS) ||
- 				   (mb->tso_segsz > IAVF_MAX_TSO_MSS)) {
- 				/* MSS outside the range are considered malicious */
--				PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+				PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- 						mb->tso_segsz, IAVF_MIN_TSO_MSS, IAVF_MAX_TSO_MSS);
- 				pkt_error = true;
- 				break;
- 			} else if (mb->nb_segs > txq->nb_tx_desc) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- 				pkt_error = true;
- 				break;
- 			}
-@@ -3869,13 +3869,13 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- 		if (adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_OFFLOAD) {
- 			if (ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- 				pkt_error = true;
- 				break;
- 			}
-
- 			if (!rte_validate_tx_offload(mb)) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- 				pkt_error = true;
- 				break;
- 			}
@@ -7993 +7645 @@
-index 8f3a385ca5..91f4943a11 100644
+index 5d845bba31..a025b0ea7f 100644
@@ -8005 +7657 @@
-@@ -2088,7 +2088,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
@@ -8082 +7734 @@
-index 304f959b7e..7b1bd163a2 100644
+index c1d2b91ad7..86f43050a5 100644
@@ -8085 +7737 @@
-@@ -1907,7 +1907,7 @@ no_dsn:
+@@ -1867,7 +1867,7 @@ no_dsn:
@@ -8094 +7746 @@
-@@ -1916,7 +1916,7 @@ load_fw:
+@@ -1876,7 +1876,7 @@ load_fw:
@@ -8103 +7755 @@
-@@ -2166,7 +2166,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
+@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
@@ -8112 +7764 @@
-@@ -2405,20 +2405,20 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8136 +7788 @@
-@@ -2470,14 +2470,14 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8147 +7799 @@
- 	ret = ice_lldp_fltr_add_remove(hw, vsi->vsi_id, true);
+ 	ret = ice_vsi_config_sw_lldp(vsi, true);
@@ -8154,3 +7806,3 @@
-@@ -2502,7 +2502,7 @@ ice_dev_init(struct rte_eth_dev *dev)
- 	if (hw->phy_model == ICE_PHY_E822) {
- 		ret = ice_start_phy_timer_e822(hw, hw->pf_id);
+@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
+ 	if (hw->phy_cfg == ICE_PHY_E822) {
+ 		ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
@@ -8163 +7815 @@
-@@ -2748,7 +2748,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8172 +7824 @@
-@@ -2769,7 +2769,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8181 +7833 @@
-@@ -3164,7 +3164,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8190 +7842 @@
-@@ -3180,15 +3180,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8209 +7861 @@
-@@ -3378,7 +3378,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
+@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
@@ -8218 +7870 @@
-@@ -3413,12 +3413,12 @@ static int ice_init_rss(struct ice_pf *pf)
+@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)
@@ -8233 +7885 @@
-@@ -4277,7 +4277,7 @@ ice_phy_conf_link(struct ice_hw *hw,
+@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
@@ -8242 +7894 @@
-@@ -5734,7 +5734,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
+@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
@@ -8251 +7903 @@
-@@ -5805,7 +5805,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
+@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
@@ -8260,27 +7911,0 @@
-@@ -6773,7 +6773,7 @@ ice_fec_get_capability(struct rte_eth_dev *dev, struct rte_eth_fec_capa *speed_f
- 	ret = ice_aq_get_phy_caps(hw->port_info, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- 				  &pcaps, NULL);
- 	if (ret != ICE_SUCCESS) {
--		PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- 				ret);
- 		return -ENOTSUP;
- 	}
-@@ -6805,7 +6805,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
-
- 	ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
- 	if (ret != ICE_SUCCESS) {
--		PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- 			ret);
- 		return -ENOTSUP;
- 	}
-@@ -6815,7 +6815,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- 	ret = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- 				  &pcaps, NULL);
- 	if (ret != ICE_SUCCESS) {
--		PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+		PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- 			ret);
- 		return -ENOTSUP;
- 	}
@@ -8288 +7913 @@
-index edd8cc8f1a..741107f939 100644
+index 0b7920ad44..dd9130ace3 100644
@@ -8301 +7926 @@
-index b720e0f755..00d65bc637 100644
+index d8c46347d2..dad117679d 100644
@@ -8304 +7929 @@
-@@ -1245,13 +1245,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8320 +7945 @@
-@@ -1259,7 +1259,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8329 +7954 @@
-@@ -1381,7 +1381,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
+@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
@@ -8339 +7964 @@
-index f270498ed1..acd7539b5e 100644
+index dea6a5b535..7da314217a 100644
@@ -8342 +7967 @@
-@@ -2839,7 +2839,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
+@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
@@ -8351,66 +7975,0 @@
-@@ -3714,7 +3714,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- 		if ((adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_MBUF) &&
- 		    (rte_mbuf_check(mb, 1, &reason) != 0)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+			PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- 			pkt_error = true;
- 			break;
- 		}
-@@ -3723,7 +3723,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- 		    (mb->data_len > mb->pkt_len ||
- 		     mb->data_len < ICE_TX_MIN_PKT_LEN ||
- 		     mb->data_len > ICE_FRAME_SIZE_MAX)) {
--			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)\n",
-+			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)",
- 				mb->data_len, ICE_TX_MIN_PKT_LEN, ICE_FRAME_SIZE_MAX);
- 			pkt_error = true;
- 			break;
-@@ -3736,13 +3736,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- 				 * the limites.
- 				 */
- 				if (mb->nb_segs > ICE_TX_MTU_SEG_MAX) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- 						mb->nb_segs, ICE_TX_MTU_SEG_MAX);
- 					pkt_error = true;
- 					break;
- 				}
- 				if (mb->pkt_len > ICE_FRAME_SIZE_MAX) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- 						mb->nb_segs, ICE_FRAME_SIZE_MAX);
- 					pkt_error = true;
- 					break;
-@@ -3756,13 +3756,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- 					/**
- 					 * MSS outside the range are considered malicious
- 					 */
--					PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+					PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- 						mb->tso_segsz, ICE_MIN_TSO_MSS, ICE_MAX_TSO_MSS);
- 					pkt_error = true;
- 					break;
- 				}
- 				if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
--					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- 					pkt_error = true;
- 					break;
- 				}
-@@ -3771,13 +3771,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- 		if (adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_OFFLOAD) {
- 			if (ol_flags & ICE_TX_OFFLOAD_NOTSUP_MASK) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- 				pkt_error = true;
- 				break;
- 			}
-
- 			if (!rte_validate_tx_offload(mb)) {
--				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- 				pkt_error = true;
- 				break;
- 			}
@@ -8692 +8251 @@
-index d88d4065f1..357307b2e0 100644
+index a44497ce51..3ac65ca3b3 100644
@@ -8695 +8254 @@
-@@ -1155,10 +1155,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
@@ -8707 +8266 @@
-@@ -1783,7 +1780,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -8825 +8384 @@
-index 91ba395ac3..e967fe5e48 100644
+index 0a0f639e39..002bc71c2a 100644
@@ -8828 +8387 @@
-@@ -173,14 +173,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8845 +8404 @@
-@@ -193,7 +193,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8854 +8413 @@
-@@ -424,7 +424,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
+@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
@@ -8863 +8422 @@
-@@ -630,7 +630,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8872 +8431 @@
-@@ -679,7 +679,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8881 +8440 @@
-@@ -713,7 +713,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8890 +8449 @@
-@@ -769,7 +769,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8899 +8458 @@
-@@ -806,7 +806,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8944 +8503 @@
-index 16da22b5c6..e220ffaf92 100644
+index 18377d9caf..f05f4c24df 100644
@@ -8957 +8516 @@
-index c19db5c0eb..9c2872429f 100644
+index a1a7e93288..7c0ac6888b 100644
@@ -9001 +8560 @@
-index e8dda8d460..29944f5070 100644
+index 4dced0d328..68b0a8b8ab 100644
@@ -9004 +8563 @@
-@@ -1119,7 +1119,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
+@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
@@ -9014 +8573 @@
-index 23a452cacd..6c45ffaad3 100644
+index fb86e7b10d..4321924cb9 100644
@@ -9017,3 +8576,3 @@
-@@ -382,7 +382,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- 		err = ngbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- 		if (err) {
+@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ 		ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
+ 		if (ssid == 0x1) {
@@ -9076 +8635 @@
-index 45ea0b9c34..e84de5c1c7 100644
+index 9f11a2f317..8628edf8a7 100644
@@ -9079 +8638 @@
-@@ -170,7 +170,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
+@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
@@ -9089 +8648 @@
-index 39b28de2d0..d44ac211f1 100644
+index ef275703c3..74b63a161f 100644
@@ -9147 +8706 @@
-index 2aeebb4675..76f72c64c9 100644
+index 7f4edf8dcf..fdab542246 100644
@@ -9232 +8791 @@
-index 73eb0c9d31..7d5dd91a77 100644
+index 82e57520d3..938c51b35d 100644
@@ -9235 +8794 @@
-@@ -120,7 +120,7 @@ union otx_ep_instr_irh {
+@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
@@ -9245 +8804 @@
-index 46211361a0..c4a5a67c79 100644
+index 615cbbb648..c0298a56ac 100644
@@ -9248 +8807 @@
-@@ -175,7 +175,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
@@ -9257 +8816 @@
-@@ -220,7 +220,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9266 +8825 @@
-@@ -237,7 +237,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9275 +8834 @@
-@@ -246,7 +246,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9284 +8843 @@
-@@ -255,7 +255,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9293 +8852 @@
-@@ -298,7 +298,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
+@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
@@ -9302 +8861 @@
-@@ -342,12 +342,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
+@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
@@ -9317 +8876 @@
-@@ -361,7 +361,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9326 +8885 @@
-@@ -385,7 +385,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9335 +8894 @@
-@@ -393,7 +393,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9344 +8903 @@
-@@ -413,10 +413,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
+@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
@@ -9357 +8916 @@
-@@ -460,29 +460,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9392 +8951 @@
-@@ -511,7 +511,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
+@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
@@ -9401 +8960 @@
-@@ -545,16 +545,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9421 +8980 @@
-@@ -562,12 +562,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9436 +8995 @@
-@@ -660,23 +660,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
+@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
@@ -9465 +9024 @@
-@@ -692,7 +692,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9474 +9033 @@
-@@ -741,22 +741,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
+@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
@@ -9502,10 +9061 @@
-@@ -780,7 +780,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
-
- 	/* Parse devargs string */
- 	if (otx_ethdev_parse_devargs(eth_dev->device->devargs, otx_epvf)) {
--		otx_ep_err("Failed to parse devargs\n");
-+		otx_ep_err("Failed to parse devargs");
- 		return -EINVAL;
- 	}
-
-@@ -797,7 +797,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9520 +9070 @@
-@@ -817,12 +817,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9536 +9086 @@
-@@ -831,7 +831,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9665 +9215 @@
-index ec32ab087e..9680a59797 100644
+index c421ef0a1c..65a1f304e8 100644
@@ -9668 +9218 @@
-@@ -23,19 +23,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
+@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
@@ -9691 +9241 @@
-@@ -47,7 +47,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9700 +9250 @@
-@@ -69,7 +69,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9709 +9259 @@
-@@ -95,7 +95,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9718 +9268 @@
-@@ -103,7 +103,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9727 +9277 @@
-@@ -118,7 +118,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9736 +9286 @@
-@@ -126,7 +126,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9745 +9295 @@
-@@ -134,14 +134,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9762 +9312 @@
-@@ -187,12 +187,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
+@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
@@ -9777 +9327 @@
-@@ -235,7 +235,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9786 +9336 @@
-@@ -255,7 +255,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9795 +9345 @@
-@@ -270,7 +270,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
+@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
@@ -9804 +9354 @@
-@@ -324,7 +324,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9813 +9363 @@
-@@ -344,23 +344,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9841 +9391 @@
-@@ -396,17 +396,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
+@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
@@ -9862 +9412 @@
-@@ -431,12 +431,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
+@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
@@ -9877 +9427 @@
-@@ -599,7 +599,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
+@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
@@ -9886 +9436 @@
-@@ -675,16 +675,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
+@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
@@ -9910 +9460 @@
-@@ -757,7 +757,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9919 +9469 @@
-@@ -766,7 +766,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9928 +9478 @@
-@@ -834,7 +834,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -10039 +9589 @@
-index 3c2154043c..c802b2c389 100644
+index 2a8378a33e..5f0cd1bb7f 100644
@@ -10079 +9629 @@
-index eccaaa2448..725ffcb2bc 100644
+index 0073dd7405..dc04a52639 100644
@@ -10103 +9653 @@
-@@ -583,16 +583,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10123 +9673 @@
-@@ -603,7 +603,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10132 +9682 @@
-@@ -993,24 +993,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
+@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
@@ -10227 +9777 @@
-index ede5fc83e3..25e28fd9f6 100644
+index c35585f5fd..dcc8cbe943 100644
@@ -10499 +10049 @@
-index 609d95dcfa..4441a90bdf 100644
+index ba2ef4058e..ee563c55ce 100644
@@ -10502 +10052 @@
-@@ -1814,7 +1814,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
+@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
@@ -10512 +10062 @@
-index 2fabb9fc4e..2834468764 100644
+index ad29c3cfec..a8bdc10232 100644
@@ -10515,3 +10065,3 @@
-@@ -613,7 +613,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- 		err = txgbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- 		if (err) {
+@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ 		ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
+ 		if (ssid == 0x1) {
@@ -10524 +10074 @@
-@@ -2762,7 +2762,7 @@ txgbe_dev_detect_sfp(void *param)
+@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
@@ -10734,13 +10283,0 @@
-diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-index 3246b74e13..bc3e2a9af5 100644
---- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
-+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-@@ -670,7 +670,7 @@ vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
- 		notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
- 				      data->vhostfd, i * page_size);
- 		if (notify_area[i] == MAP_FAILED) {
--			PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d\n",
-+			PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
- 				    dev->path, i);
- 			i--;
- 			goto map_err;
@@ -10748 +10285 @@
-index 48b872524a..e8642be86b 100644
+index 1bfd6aba80..d93d443ec9 100644
@@ -10751 +10288 @@
-@@ -1149,7 +1149,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
+@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
@@ -10761 +10298 @@
-index 467fb61137..78fac63ab6 100644
+index 70ae9c6035..f98cdb6d58 100644
@@ -10764 +10301 @@
-@@ -1095,10 +1095,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
+@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
@@ -10870 +10407 @@
-index a972b3b7a4..113a22b0a7 100644
+index f89bd3f9e2..997fbf8a0d 100644
@@ -10916 +10453 @@
-@@ -661,7 +661,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
+@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
@@ -10925 +10462 @@
-@@ -816,13 +816,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
+@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
@@ -10941 +10478 @@
-@@ -846,14 +846,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10959 +10496 @@
-@@ -864,7 +864,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10968 +10505 @@
-@@ -880,7 +880,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10977 +10514 @@
-@@ -923,7 +923,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10986 +10523 @@
-@@ -954,7 +954,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10995 +10532 @@
-@@ -1230,13 +1230,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
+@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
@@ -11011 +10548 @@
-@@ -1251,7 +1251,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
+@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
@@ -11020 +10557 @@
-@@ -1263,14 +1263,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
+@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
@@ -11037 +10574 @@
-@@ -1290,15 +1290,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
+@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
@@ -11058 +10595 @@
-@@ -1321,17 +1321,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
+@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
@@ -11082 +10619 @@
-@@ -1350,28 +1350,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
+@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
@@ -11122 +10659 @@
-@@ -1381,7 +1381,7 @@ fme_interrupt_handler(void *param)
+@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
@@ -11131 +10668 @@
-@@ -1407,7 +1407,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11140 +10677 @@
-@@ -1417,7 +1417,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11149 +10686 @@
-@@ -1480,7 +1480,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -11158 +10695 @@
-@@ -1521,7 +1521,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,

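Editor's note: as the hunks above illustrate, the recurring fix in this patch is dropping a trailing \n from format strings passed to driver log helpers such as plt_err(), PMD_DRV_LOG() and otx_ep_err(), which already append a newline themselves, so the extra \n produced blank lines in the log output. A minimal standalone sketch of the before/after behaviour follows; the DRV_LOG macro and the message are hypothetical, for illustration only, not DPDK API:

    #include <stdio.h>

    /*
     * Hypothetical log macro that already appends a newline, mirroring
     * the driver log wrappers touched by this patch (plt_err,
     * PMD_DRV_LOG, ...). Name and prefix are illustrative only.
     */
    #define DRV_LOG(fmt, ...) fprintf(stderr, "drv: " fmt "\n", ##__VA_ARGS__)

    int main(void)
    {
            unsigned int num = 42;

            /* Before the patch: the literal \n plus the macro's own
             * newline printed an extra blank line after the message. */
            DRV_LOG("Invalid BD num=%u\n", num);

            /* After the patch: exactly one newline, as intended. */
            DRV_LOG("Invalid BD num=%u", num);

            return 0;
    }

Compiled and run, the sketch prints the first message followed by a spurious blank line, then the second message terminated by a single newline; eliminating that blank line across the drivers listed above is the entire effect of this patch.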
Thread overview: 128+ messages
2024-11-11  6:26 patch " Xueming Li
2024-11-11  6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
2024-11-11  6:26 ` patch 'log: add a per line log helper' " Xueming Li
2024-11-12  9:02   ` David Marchand
2024-11-12 11:35     ` Xueming Li
2024-11-12 12:47       ` David Marchand
2024-11-12 13:56         ` Xueming Li
2024-11-12 14:09           ` David Marchand
2024-11-12 14:11             ` Xueming Li
2024-11-11  6:26 ` Xueming Li [this message]
2024-11-11  6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
2024-11-11  6:26 ` patch 'test/event: fix schedule type' " Xueming Li
2024-11-11  6:26 ` patch 'test/event: fix target event queue' " Xueming Li
2024-11-11  6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
2024-11-11  6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
2024-11-11  6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
2024-11-11  6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
2024-11-11  6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
2024-11-11  6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
2024-11-11  6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
2024-11-11  6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
2024-11-11  6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
2024-11-11  6:27 ` patch 'common/idpf: fix use after free in mailbox init' " Xueming Li
2024-11-11  6:27 ` patch 'crypto/bcmfs: fix free function mismatch' " Xueming Li
2024-11-11  6:27 ` patch 'dma/idxd: fix free function mismatch in device probe' " Xueming Li
2024-11-11  6:27 ` patch 'event/cnxk: fix free function mismatch in port config' " Xueming Li
2024-11-11  6:27 ` patch 'net/cnxk: fix use after free in mempool create' " Xueming Li
2024-11-11  6:27 ` patch 'net/cpfl: fix invalid free in JSON parser' " Xueming Li
2024-11-11  6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
2024-11-11  6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
2024-11-11  6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
2024-11-11  6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
2024-11-11  6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
2024-11-11  6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
2024-11-11  6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
2024-11-11  6:27 ` patch 'power: enable CPPC' " Xueming Li
2024-11-11  6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
2024-11-11  6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
2024-11-11  6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
2024-11-11  6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
2024-11-11  6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
2024-11-11  6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
2024-11-11  6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
2024-11-11  6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
2024-11-11  6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
2024-11-11  6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
2024-11-11  6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
2024-11-11  6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
2024-11-11  6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
2024-11-11  6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
2024-11-11  6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
2024-11-11  6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
2024-11-11  6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
2024-11-11  6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
2024-11-11  6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
2024-11-11  6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
2024-11-11  6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
2024-11-11  6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
2024-11-11  6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
2024-11-11  6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
2024-11-11  6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
2024-11-11  6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
2024-11-11  6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
2024-11-11  6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
2024-11-11  6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
2024-11-11  6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix DDP loading with reserved track ID' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix repeated register dumps' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix unchecked return value' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e/base: fix loop bounds' " Xueming Li
2024-11-11  6:27 ` patch 'net/iavf: delay VF reset command' " Xueming Li
2024-11-11  6:27 ` patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' " Xueming Li
2024-11-11  6:27 ` patch 'net/ice: " Xueming Li
2024-11-11  6:27 ` patch 'net/iavf: " Xueming Li
2024-11-11  6:27 ` patch 'common/idpf: " Xueming Li
2024-11-11  6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
2024-11-11  6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
2024-11-11  6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
2024-11-11  6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
2024-11-11  6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
2024-11-11  6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
2024-11-11  6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
2024-11-11  6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
2024-11-11  6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
2024-11-11  6:28 ` patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' " Xueming Li
2024-11-11  6:28 ` patch 'net/dpaa: fix typecasting channel ID' " Xueming Li
2024-11-11  6:28 ` patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' " Xueming Li
2024-11-11  6:28 ` patch 'bus/dpaa: fix the fman details status' " Xueming Li
2024-11-11  6:28 ` patch 'net/dpaa: fix reallocate mbuf handling' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve: always attempt Rx refill on DQ' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: fix type declaration of some variables' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: fix representor port link status update' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve: fix refill logic causing memory corruption' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve: add IO memory barriers before reading descriptors' " Xueming Li
2024-11-11  6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
2024-11-11  6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
2024-11-11  6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
2024-11-11  6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
2024-11-11  6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
2024-11-11  6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
2024-11-11  6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
2024-11-11  6:28 ` patch 'net/iavf: preserve MAC address with i40e PF Linux driver' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: workaround list management of Rx queue control' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5/hws: fix flex item as tunnel header' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: add flex item query for tunnel mode' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: fix flex item " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
2024-11-11  6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
2024-11-11  6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
2024-11-11  6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
2024-11-11  6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
2024-11-11  6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
2024-11-11  6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
2024-11-11  6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
2024-11-11  6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
2024-11-11  6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
2024-11-11  6:28 ` patch '' " Xueming Li
